Unveiling The Factors of Aesthetic Preferences with Explainable AI (2311.14410v2)

Published 24 Nov 2023 in cs.LG

Abstract: The allure of aesthetic appeal in images captivates our senses, yet the underlying intricacies of aesthetic preferences remain elusive. In this study, we pioneer a novel perspective by utilizing several different ML models that focus on aesthetic attributes known to influence preferences. Our models process these attributes as inputs to predict the aesthetic scores of images. Moreover, to delve deeper and obtain interpretable explanations regarding the factors driving aesthetic preferences, we utilize the popular Explainable AI (XAI) technique known as SHapley Additive exPlanations (SHAP). Our methodology compares the performance of various ML models, including Random Forest, XGBoost, Support Vector Regression, and Multilayer Perceptron, in accurately predicting aesthetic scores, and consistently interprets the results in conjunction with SHAP. We conduct experiments on three image aesthetic benchmarks, namely the Aesthetics with Attributes Database (AADB), Explainable Visual Aesthetics (EVA), and the Personalized image Aesthetics database with Rich Attributes (PARA), providing insights into the roles of attributes and their interactions. Finally, our study presents ML models for aesthetics research, alongside the introduction of XAI. Our aim is to shed light on the complex nature of aesthetic preferences in images through ML and to provide a deeper understanding of the attributes that influence aesthetic judgements.
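The abstract describes a fully tabular pipeline: per-image attribute ratings go in, an overall aesthetic score comes out, and SHAP attributes each prediction back to the individual attributes. Below is a minimal sketch of that workflow, assuming a hypothetical CSV of attribute ratings with illustrative column names (the actual AADB/EVA/PARA attribute sets and the paper's hyperparameters are not specified here); it compares the four regressors named in the abstract and computes SHAP values for one of them.

```python
# Hypothetical sketch: attribute-based regressors for aesthetic scores,
# compared side by side and explained with SHAP. File name, column names,
# and hyperparameters are illustrative assumptions, not the authors' setup.
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from xgboost import XGBRegressor

# Assumed layout: one row per image, attribute ratings plus an overall score.
df = pd.read_csv("aesthetic_attributes.csv")  # hypothetical file
attributes = ["color_harmony", "content", "depth_of_field",
              "light", "object_emphasis", "rule_of_thirds", "symmetry"]
X, y = df[attributes].values, df["aesthetic_score"].values
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# The four model families named in the abstract (feature scaling omitted
# for brevity; it would help the SVR and the MLP in practice).
models = {
    "RandomForest": RandomForestRegressor(n_estimators=300, random_state=0),
    "XGBoost": XGBRegressor(n_estimators=300, learning_rate=0.05, random_state=0),
    "SVR": SVR(kernel="rbf", C=1.0),
    "MLP": MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    mse = mean_squared_error(y_test, model.predict(X_test))
    print(f"{name}: test MSE = {mse:.4f}")

# SHAP for one of the tree ensembles: per-attribute contributions to each
# predicted aesthetic score, relative to the mean prediction.
explainer = shap.TreeExplainer(models["XGBoost"])
shap_values = explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test, feature_names=attributes)
```

Tree ensembles pair naturally with shap.TreeExplainer; for the SVR and MLP, the model-agnostic shap.KernelExplainer would be the analogous choice, at higher computational cost.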

International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. 
IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Celona, L., Leonardi, M., Napoletano, P., Rozza, A.: Composition and style attributes guided image aesthetic assessment. IEEE Transactions on Image Processing 31, 5009–5024 (2022) Li et al. [2023a] Li, L., Huang, Y., Wu, J., Yang, Y., Li, Y., Guo, Y., Shi, G.: Theme-aware visual attribute reasoning for image aesthetics assessment. IEEE Transactions on Circuits and Systems for Video Technology (2023) Li et al. [2023b] Li, L., Zhu, T., Chen, P., Yang, Y., Li, Y., Lin, W.: Image aesthetic assessment with attribute-assisted multimodal memory network. IEEE Transactions on Circuits and Systems for Video Technology (2023) Lundberg and Lee [2017] Lundberg, S.M., Lee, S.I.: A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems 30 (2017) Drucker et al. [1996] Drucker, H., Burges, C.J.C., Kaufman, L., Smola, A., Vapnik, V.: Support vector regression machines. Advances in Neural Information Processing Systems 9 (1996) Boser et al. [1992] Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. Proceedings of the Fifth Annual Workshop on Computational Learning Theory, 144–152 (1992) Ho [1995] Ho, T.K.: Random decision forests. Proceedings of Third International Conference on Document Analysis and Recognition 1, 278–282 (1995) Alpaydın [2014] Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. 
International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. 
[2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Li, L., Huang, Y., Wu, J., Yang, Y., Li, Y., Guo, Y., Shi, G.: Theme-aware visual attribute reasoning for image aesthetics assessment. IEEE Transactions on Circuits and Systems for Video Technology (2023) Li et al. [2023b] Li, L., Zhu, T., Chen, P., Yang, Y., Li, Y., Lin, W.: Image aesthetic assessment with attribute-assisted multimodal memory network. IEEE Transactions on Circuits and Systems for Video Technology (2023) Lundberg and Lee [2017] Lundberg, S.M., Lee, S.I.: A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems 30 (2017) Drucker et al. [1996] Drucker, H., Burges, C.J.C., Kaufman, L., Smola, A., Vapnik, V.: Support vector regression machines. Advances in Neural Information Processing Systems 9 (1996) Boser et al. [1992] Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. Proceedings of the Fifth Annual Workshop on Computational Learning Theory, 144–152 (1992) Ho [1995] Ho, T.K.: Random decision forests. Proceedings of Third International Conference on Document Analysis and Recognition 1, 278–282 (1995) Alpaydın [2014] Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. 
MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. 
[2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Li, L., Zhu, T., Chen, P., Yang, Y., Li, Y., Lin, W.: Image aesthetic assessment with attribute-assisted multimodal memory network. IEEE Transactions on Circuits and Systems for Video Technology (2023) Lundberg and Lee [2017] Lundberg, S.M., Lee, S.I.: A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems 30 (2017) Drucker et al. [1996] Drucker, H., Burges, C.J.C., Kaufman, L., Smola, A., Vapnik, V.: Support vector regression machines. Advances in Neural Information Processing Systems 9 (1996) Boser et al. [1992] Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. Proceedings of the Fifth Annual Workshop on Computational Learning Theory, 144–152 (1992) Ho [1995] Ho, T.K.: Random decision forests. Proceedings of Third International Conference on Document Analysis and Recognition 1, 278–282 (1995) Alpaydın [2014] Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. 
Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. 
[2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Lundberg, S.M., Lee, S.I.: A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems 30 (2017) Drucker et al. [1996] Drucker, H., Burges, C.J.C., Kaufman, L., Smola, A., Vapnik, V.: Support vector regression machines. Advances in Neural Information Processing Systems 9 (1996) Boser et al. [1992] Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. Proceedings of the Fifth Annual Workshop on Computational Learning Theory, 144–152 (1992) Ho [1995] Ho, T.K.: Random decision forests. Proceedings of Third International Conference on Document Analysis and Recognition 1, 278–282 (1995) Alpaydın [2014] Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. 
[2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. 
[2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Drucker, H., Burges, C.J.C., Kaufman, L., Smola, A., Vapnik, V.: Support vector regression machines. Advances in Neural Information Processing Systems 9 (1996) Boser et al. [1992] Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. Proceedings of the Fifth Annual Workshop on Computational Learning Theory, 144–152 (1992) Ho [1995] Ho, T.K.: Random decision forests. Proceedings of Third International Conference on Document Analysis and Recognition 1, 278–282 (1995) Alpaydın [2014] Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. 
MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. 
[2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. Proceedings of the Fifth Annual Workshop on Computational Learning Theory, 144–152 (1992) Ho [1995] Ho, T.K.: Random decision forests. Proceedings of Third International Conference on Document Analysis and Recognition 1, 278–282 (1995) Alpaydın [2014] Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. 
[2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. 
IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Qie, N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020)
[2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. 
Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. 
Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. 
Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. 
[2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. 
[2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. 
[2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. 
International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. 
[2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. 
[2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. 
Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. 
[2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. 
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. 
Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. 
[2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. 
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. 
[2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. 
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. 
International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020)
  2. Hoenig, F.: Defining computational aesthetics. Computational aesthetics in graphics, visualization and imaging, 13–18 (2005) Brachmann and Redies [2017] Brachmann, A., Redies, C.: Computational and experimental approaches to visual aesthetics. Frontiers in Computational Neuroscience 11 (2017) Valenzise et al. [2022] Valenzise, G., Kang, C., Dufaux, F.: Advances and challenges in computational image aesthetics. Human perception of visual information: psychological and computational perspectives (2022) Lu et al. [2014] Lu, X., Lin, Z., Jin, H., Yang, J., Wang, J.Z.: RAPID: Rating pictorial aesthetics using deep learning. Proceedings of the 22nd ACM International Conference on Multimedia, 457–466 (2014) Talebi and Milanfar [2018] Talebi, H., Milanfar, P.: NIMA: Neural image assessment. IEEE Transactions on Image Processing 27(8), 3998–4011 (2018) Pan et al. [2019] Pan, B., Wang, S., Jiang, Q.: Image aesthetic assessment assisted by attributes through adversarial learning. Proceedings of the AAAI Conference on Artificial Intelligence 33, 679–686 (2019) Soydaner and Wagemans [2023] Soydaner, D., Wagemans, J.: Multi-task convolutional neural network for image aesthetic assessment. arXiv preprint arXiv: 2305.09373 (2023) Celona et al. [2022] Celona, L., Leonardi, M., Napoletano, P., Rozza, A.: Composition and style attributes guided image aesthetic assessment. IEEE Transactions on Image Processing 31, 5009–5024 (2022) Li et al. [2023a] Li, L., Huang, Y., Wu, J., Yang, Y., Li, Y., Guo, Y., Shi, G.: Theme-aware visual attribute reasoning for image aesthetics assessment. IEEE Transactions on Circuits and Systems for Video Technology (2023) Li et al. [2023b] Li, L., Zhu, T., Chen, P., Yang, Y., Li, Y., Lin, W.: Image aesthetic assessment with attribute-assisted multimodal memory network. IEEE Transactions on Circuits and Systems for Video Technology (2023) Lundberg and Lee [2017] Lundberg, S.M., Lee, S.I.: A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems 30 (2017) Drucker et al. [1996] Drucker, H., Burges, C.J.C., Kaufman, L., Smola, A., Vapnik, V.: Support vector regression machines. Advances in Neural Information Processing Systems 9 (1996) Boser et al. [1992] Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. Proceedings of the Fifth Annual Workshop on Computational Learning Theory, 144–152 (1992) Ho [1995] Ho, T.K.: Random decision forests. Proceedings of Third International Conference on Document Analysis and Recognition 1, 278–282 (1995) Alpaydın [2014] Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. 
MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. 
[2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Brachmann, A., Redies, C.: Computational and experimental approaches to visual aesthetics. Frontiers in Computational Neuroscience 11 (2017) Valenzise et al. [2022] Valenzise, G., Kang, C., Dufaux, F.: Advances and challenges in computational image aesthetics. Human perception of visual information: psychological and computational perspectives (2022) Lu et al. [2014] Lu, X., Lin, Z., Jin, H., Yang, J., Wang, J.Z.: RAPID: Rating pictorial aesthetics using deep learning. Proceedings of the 22nd ACM International Conference on Multimedia, 457–466 (2014) Talebi and Milanfar [2018] Talebi, H., Milanfar, P.: NIMA: Neural image assessment. IEEE Transactions on Image Processing 27(8), 3998–4011 (2018) Pan et al. [2019] Pan, B., Wang, S., Jiang, Q.: Image aesthetic assessment assisted by attributes through adversarial learning. Proceedings of the AAAI Conference on Artificial Intelligence 33, 679–686 (2019) Soydaner and Wagemans [2023] Soydaner, D., Wagemans, J.: Multi-task convolutional neural network for image aesthetic assessment. arXiv preprint arXiv: 2305.09373 (2023) Celona et al. [2022] Celona, L., Leonardi, M., Napoletano, P., Rozza, A.: Composition and style attributes guided image aesthetic assessment. IEEE Transactions on Image Processing 31, 5009–5024 (2022) Li et al. [2023a] Li, L., Huang, Y., Wu, J., Yang, Y., Li, Y., Guo, Y., Shi, G.: Theme-aware visual attribute reasoning for image aesthetics assessment. IEEE Transactions on Circuits and Systems for Video Technology (2023) Li et al. [2023b] Li, L., Zhu, T., Chen, P., Yang, Y., Li, Y., Lin, W.: Image aesthetic assessment with attribute-assisted multimodal memory network. 
IEEE Transactions on Circuits and Systems for Video Technology (2023) Lundberg and Lee [2017] Lundberg, S.M., Lee, S.I.: A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems 30 (2017) Drucker et al. [1996] Drucker, H., Burges, C.J.C., Kaufman, L., Smola, A., Vapnik, V.: Support vector regression machines. Advances in Neural Information Processing Systems 9 (1996) Boser et al. [1992] Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. Proceedings of the Fifth Annual Workshop on Computational Learning Theory, 144–152 (1992) Ho [1995] Ho, T.K.: Random decision forests. Proceedings of Third International Conference on Document Analysis and Recognition 1, 278–282 (1995) Alpaydın [2014] Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. 
[2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. 
International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Valenzise, G., Kang, C., Dufaux, F.: Advances and challenges in computational image aesthetics. Human perception of visual information: psychological and computational perspectives (2022) Lu et al. [2014] Lu, X., Lin, Z., Jin, H., Yang, J., Wang, J.Z.: RAPID: Rating pictorial aesthetics using deep learning. Proceedings of the 22nd ACM International Conference on Multimedia, 457–466 (2014) Talebi and Milanfar [2018] Talebi, H., Milanfar, P.: NIMA: Neural image assessment. IEEE Transactions on Image Processing 27(8), 3998–4011 (2018) Pan et al. [2019] Pan, B., Wang, S., Jiang, Q.: Image aesthetic assessment assisted by attributes through adversarial learning. Proceedings of the AAAI Conference on Artificial Intelligence 33, 679–686 (2019) Soydaner and Wagemans [2023] Soydaner, D., Wagemans, J.: Multi-task convolutional neural network for image aesthetic assessment. arXiv preprint arXiv: 2305.09373 (2023) Celona et al. [2022] Celona, L., Leonardi, M., Napoletano, P., Rozza, A.: Composition and style attributes guided image aesthetic assessment. IEEE Transactions on Image Processing 31, 5009–5024 (2022) Li et al. [2023a] Li, L., Huang, Y., Wu, J., Yang, Y., Li, Y., Guo, Y., Shi, G.: Theme-aware visual attribute reasoning for image aesthetics assessment. IEEE Transactions on Circuits and Systems for Video Technology (2023) Li et al. [2023b] Li, L., Zhu, T., Chen, P., Yang, Y., Li, Y., Lin, W.: Image aesthetic assessment with attribute-assisted multimodal memory network. IEEE Transactions on Circuits and Systems for Video Technology (2023) Lundberg and Lee [2017] Lundberg, S.M., Lee, S.I.: A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems 30 (2017) Drucker et al. [1996] Drucker, H., Burges, C.J.C., Kaufman, L., Smola, A., Vapnik, V.: Support vector regression machines. Advances in Neural Information Processing Systems 9 (1996) Boser et al. [1992] Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. Proceedings of the Fifth Annual Workshop on Computational Learning Theory, 144–152 (1992) Ho [1995] Ho, T.K.: Random decision forests. Proceedings of Third International Conference on Document Analysis and Recognition 1, 278–282 (1995) Alpaydın [2014] Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. 
[1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. 
[2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Lu, X., Lin, Z., Jin, H., Yang, J., Wang, J.Z.: RAPID: Rating pictorial aesthetics using deep learning. Proceedings of the 22nd ACM International Conference on Multimedia, 457–466 (2014) Talebi and Milanfar [2018] Talebi, H., Milanfar, P.: NIMA: Neural image assessment. IEEE Transactions on Image Processing 27(8), 3998–4011 (2018) Pan et al. [2019] Pan, B., Wang, S., Jiang, Q.: Image aesthetic assessment assisted by attributes through adversarial learning. Proceedings of the AAAI Conference on Artificial Intelligence 33, 679–686 (2019) Soydaner and Wagemans [2023] Soydaner, D., Wagemans, J.: Multi-task convolutional neural network for image aesthetic assessment. arXiv preprint arXiv: 2305.09373 (2023) Celona et al. [2022] Celona, L., Leonardi, M., Napoletano, P., Rozza, A.: Composition and style attributes guided image aesthetic assessment. IEEE Transactions on Image Processing 31, 5009–5024 (2022) Li et al. [2023a] Li, L., Huang, Y., Wu, J., Yang, Y., Li, Y., Guo, Y., Shi, G.: Theme-aware visual attribute reasoning for image aesthetics assessment. IEEE Transactions on Circuits and Systems for Video Technology (2023) Li et al. [2023b] Li, L., Zhu, T., Chen, P., Yang, Y., Li, Y., Lin, W.: Image aesthetic assessment with attribute-assisted multimodal memory network. IEEE Transactions on Circuits and Systems for Video Technology (2023) Lundberg and Lee [2017] Lundberg, S.M., Lee, S.I.: A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems 30 (2017) Drucker et al. [1996] Drucker, H., Burges, C.J.C., Kaufman, L., Smola, A., Vapnik, V.: Support vector regression machines. Advances in Neural Information Processing Systems 9 (1996) Boser et al. [1992] Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. 
Proceedings of the Fifth Annual Workshop on Computational Learning Theory, 144–152 (1992) Ho [1995] Ho, T.K.: Random decision forests. Proceedings of Third International Conference on Document Analysis and Recognition 1, 278–282 (1995) Alpaydın [2014] Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. 
Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Talebi, H., Milanfar, P.: NIMA: Neural image assessment. IEEE Transactions on Image Processing 27(8), 3998–4011 (2018) Pan et al. [2019] Pan, B., Wang, S., Jiang, Q.: Image aesthetic assessment assisted by attributes through adversarial learning. Proceedings of the AAAI Conference on Artificial Intelligence 33, 679–686 (2019) Soydaner and Wagemans [2023] Soydaner, D., Wagemans, J.: Multi-task convolutional neural network for image aesthetic assessment. arXiv preprint arXiv: 2305.09373 (2023) Celona et al. 
[2022] Celona, L., Leonardi, M., Napoletano, P., Rozza, A.: Composition and style attributes guided image aesthetic assessment. IEEE Transactions on Image Processing 31, 5009–5024 (2022) Li et al. [2023a] Li, L., Huang, Y., Wu, J., Yang, Y., Li, Y., Guo, Y., Shi, G.: Theme-aware visual attribute reasoning for image aesthetics assessment. IEEE Transactions on Circuits and Systems for Video Technology (2023) Li et al. [2023b] Li, L., Zhu, T., Chen, P., Yang, Y., Li, Y., Lin, W.: Image aesthetic assessment with attribute-assisted multimodal memory network. IEEE Transactions on Circuits and Systems for Video Technology (2023) Lundberg and Lee [2017] Lundberg, S.M., Lee, S.I.: A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems 30 (2017) Drucker et al. [1996] Drucker, H., Burges, C.J.C., Kaufman, L., Smola, A., Vapnik, V.: Support vector regression machines. Advances in Neural Information Processing Systems 9 (1996) Boser et al. [1992] Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. Proceedings of the Fifth Annual Workshop on Computational Learning Theory, 144–152 (1992) Ho [1995] Ho, T.K.: Random decision forests. Proceedings of Third International Conference on Document Analysis and Recognition 1, 278–282 (1995) Alpaydın [2014] Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. 
[2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. 
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Pan, B., Wang, S., Jiang, Q.: Image aesthetic assessment assisted by attributes through adversarial learning. Proceedings of the AAAI Conference on Artificial Intelligence 33, 679–686 (2019) Soydaner and Wagemans [2023] Soydaner, D., Wagemans, J.: Multi-task convolutional neural network for image aesthetic assessment. arXiv preprint arXiv: 2305.09373 (2023) Celona et al. [2022] Celona, L., Leonardi, M., Napoletano, P., Rozza, A.: Composition and style attributes guided image aesthetic assessment. IEEE Transactions on Image Processing 31, 5009–5024 (2022) Li et al. [2023a] Li, L., Huang, Y., Wu, J., Yang, Y., Li, Y., Guo, Y., Shi, G.: Theme-aware visual attribute reasoning for image aesthetics assessment. IEEE Transactions on Circuits and Systems for Video Technology (2023) Li et al. [2023b] Li, L., Zhu, T., Chen, P., Yang, Y., Li, Y., Lin, W.: Image aesthetic assessment with attribute-assisted multimodal memory network. IEEE Transactions on Circuits and Systems for Video Technology (2023) Lundberg and Lee [2017] Lundberg, S.M., Lee, S.I.: A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems 30 (2017) Drucker et al. [1996] Drucker, H., Burges, C.J.C., Kaufman, L., Smola, A., Vapnik, V.: Support vector regression machines. Advances in Neural Information Processing Systems 9 (1996) Boser et al. [1992] Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. Proceedings of the Fifth Annual Workshop on Computational Learning Theory, 144–152 (1992) Ho [1995] Ho, T.K.: Random decision forests. Proceedings of Third International Conference on Document Analysis and Recognition 1, 278–282 (1995) Alpaydın [2014] Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. 
International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. 
IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. 
[2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. 
Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. 
European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. 
[2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. 
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. 
International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. 
MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. 
[2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. 
Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. 
arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. 
[2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. 
[2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. 
International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. 
Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020)
  3. Brachmann, A., Redies, C.: Computational and experimental approaches to visual aesthetics. Frontiers in Computational Neuroscience 11 (2017) Valenzise et al. [2022] Valenzise, G., Kang, C., Dufaux, F.: Advances and challenges in computational image aesthetics. Human perception of visual information: psychological and computational perspectives (2022) Lu et al. [2014] Lu, X., Lin, Z., Jin, H., Yang, J., Wang, J.Z.: RAPID: Rating pictorial aesthetics using deep learning. Proceedings of the 22nd ACM International Conference on Multimedia, 457–466 (2014) Talebi and Milanfar [2018] Talebi, H., Milanfar, P.: NIMA: Neural image assessment. IEEE Transactions on Image Processing 27(8), 3998–4011 (2018) Pan et al. [2019] Pan, B., Wang, S., Jiang, Q.: Image aesthetic assessment assisted by attributes through adversarial learning. Proceedings of the AAAI Conference on Artificial Intelligence 33, 679–686 (2019) Soydaner and Wagemans [2023] Soydaner, D., Wagemans, J.: Multi-task convolutional neural network for image aesthetic assessment. arXiv preprint arXiv: 2305.09373 (2023) Celona et al. [2022] Celona, L., Leonardi, M., Napoletano, P., Rozza, A.: Composition and style attributes guided image aesthetic assessment. IEEE Transactions on Image Processing 31, 5009–5024 (2022) Li et al. [2023a] Li, L., Huang, Y., Wu, J., Yang, Y., Li, Y., Guo, Y., Shi, G.: Theme-aware visual attribute reasoning for image aesthetics assessment. IEEE Transactions on Circuits and Systems for Video Technology (2023) Li et al. [2023b] Li, L., Zhu, T., Chen, P., Yang, Y., Li, Y., Lin, W.: Image aesthetic assessment with attribute-assisted multimodal memory network. IEEE Transactions on Circuits and Systems for Video Technology (2023) Lundberg and Lee [2017] Lundberg, S.M., Lee, S.I.: A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems 30 (2017) Drucker et al. [1996] Drucker, H., Burges, C.J.C., Kaufman, L., Smola, A., Vapnik, V.: Support vector regression machines. Advances in Neural Information Processing Systems 9 (1996) Boser et al. [1992] Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. Proceedings of the Fifth Annual Workshop on Computational Learning Theory, 144–152 (1992) Ho [1995] Ho, T.K.: Random decision forests. Proceedings of Third International Conference on Document Analysis and Recognition 1, 278–282 (1995) Alpaydın [2014] Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. 
MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. 
[2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Valenzise, G., Kang, C., Dufaux, F.: Advances and challenges in computational image aesthetics. Human perception of visual information: psychological and computational perspectives (2022) Lu et al. [2014] Lu, X., Lin, Z., Jin, H., Yang, J., Wang, J.Z.: RAPID: Rating pictorial aesthetics using deep learning. Proceedings of the 22nd ACM International Conference on Multimedia, 457–466 (2014) Talebi and Milanfar [2018] Talebi, H., Milanfar, P.: NIMA: Neural image assessment. IEEE Transactions on Image Processing 27(8), 3998–4011 (2018) Pan et al. [2019] Pan, B., Wang, S., Jiang, Q.: Image aesthetic assessment assisted by attributes through adversarial learning. Proceedings of the AAAI Conference on Artificial Intelligence 33, 679–686 (2019) Soydaner and Wagemans [2023] Soydaner, D., Wagemans, J.: Multi-task convolutional neural network for image aesthetic assessment. arXiv preprint arXiv: 2305.09373 (2023) Celona et al. [2022] Celona, L., Leonardi, M., Napoletano, P., Rozza, A.: Composition and style attributes guided image aesthetic assessment. IEEE Transactions on Image Processing 31, 5009–5024 (2022) Li et al. [2023a] Li, L., Huang, Y., Wu, J., Yang, Y., Li, Y., Guo, Y., Shi, G.: Theme-aware visual attribute reasoning for image aesthetics assessment. IEEE Transactions on Circuits and Systems for Video Technology (2023) Li et al. [2023b] Li, L., Zhu, T., Chen, P., Yang, Y., Li, Y., Lin, W.: Image aesthetic assessment with attribute-assisted multimodal memory network. IEEE Transactions on Circuits and Systems for Video Technology (2023) Lundberg and Lee [2017] Lundberg, S.M., Lee, S.I.: A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems 30 (2017) Drucker et al. [1996] Drucker, H., Burges, C.J.C., Kaufman, L., Smola, A., Vapnik, V.: Support vector regression machines. 
Advances in Neural Information Processing Systems 9 (1996) Boser et al. [1992] Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. Proceedings of the Fifth Annual Workshop on Computational Learning Theory, 144–152 (1992) Ho [1995] Ho, T.K.: Random decision forests. Proceedings of Third International Conference on Document Analysis and Recognition 1, 278–282 (1995) Alpaydın [2014] Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. 
International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Lu, X., Lin, Z., Jin, H., Yang, J., Wang, J.Z.: RAPID: Rating pictorial aesthetics using deep learning. Proceedings of the 22nd ACM International Conference on Multimedia, 457–466 (2014) Talebi and Milanfar [2018] Talebi, H., Milanfar, P.: NIMA: Neural image assessment. 
IEEE Transactions on Image Processing 27(8), 3998–4011 (2018) Pan et al. [2019] Pan, B., Wang, S., Jiang, Q.: Image aesthetic assessment assisted by attributes through adversarial learning. Proceedings of the AAAI Conference on Artificial Intelligence 33, 679–686 (2019) Soydaner and Wagemans [2023] Soydaner, D., Wagemans, J.: Multi-task convolutional neural network for image aesthetic assessment. arXiv preprint arXiv: 2305.09373 (2023) Celona et al. [2022] Celona, L., Leonardi, M., Napoletano, P., Rozza, A.: Composition and style attributes guided image aesthetic assessment. IEEE Transactions on Image Processing 31, 5009–5024 (2022) Li et al. [2023a] Li, L., Huang, Y., Wu, J., Yang, Y., Li, Y., Guo, Y., Shi, G.: Theme-aware visual attribute reasoning for image aesthetics assessment. IEEE Transactions on Circuits and Systems for Video Technology (2023) Li et al. [2023b] Li, L., Zhu, T., Chen, P., Yang, Y., Li, Y., Lin, W.: Image aesthetic assessment with attribute-assisted multimodal memory network. IEEE Transactions on Circuits and Systems for Video Technology (2023) Lundberg and Lee [2017] Lundberg, S.M., Lee, S.I.: A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems 30 (2017) Drucker et al. [1996] Drucker, H., Burges, C.J.C., Kaufman, L., Smola, A., Vapnik, V.: Support vector regression machines. Advances in Neural Information Processing Systems 9 (1996) Boser et al. [1992] Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. Proceedings of the Fifth Annual Workshop on Computational Learning Theory, 144–152 (1992) Ho [1995] Ho, T.K.: Random decision forests. Proceedings of Third International Conference on Document Analysis and Recognition 1, 278–282 (1995) Alpaydın [2014] Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. 
[2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. 
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Talebi, H., Milanfar, P.: NIMA: Neural image assessment. IEEE Transactions on Image Processing 27(8), 3998–4011 (2018) Pan et al. [2019] Pan, B., Wang, S., Jiang, Q.: Image aesthetic assessment assisted by attributes through adversarial learning. Proceedings of the AAAI Conference on Artificial Intelligence 33, 679–686 (2019) Soydaner and Wagemans [2023] Soydaner, D., Wagemans, J.: Multi-task convolutional neural network for image aesthetic assessment. arXiv preprint arXiv: 2305.09373 (2023) Celona et al. [2022] Celona, L., Leonardi, M., Napoletano, P., Rozza, A.: Composition and style attributes guided image aesthetic assessment. IEEE Transactions on Image Processing 31, 5009–5024 (2022) Li et al. [2023a] Li, L., Huang, Y., Wu, J., Yang, Y., Li, Y., Guo, Y., Shi, G.: Theme-aware visual attribute reasoning for image aesthetics assessment. IEEE Transactions on Circuits and Systems for Video Technology (2023) Li et al. [2023b] Li, L., Zhu, T., Chen, P., Yang, Y., Li, Y., Lin, W.: Image aesthetic assessment with attribute-assisted multimodal memory network. IEEE Transactions on Circuits and Systems for Video Technology (2023) Lundberg and Lee [2017] Lundberg, S.M., Lee, S.I.: A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems 30 (2017) Drucker et al. [1996] Drucker, H., Burges, C.J.C., Kaufman, L., Smola, A., Vapnik, V.: Support vector regression machines. Advances in Neural Information Processing Systems 9 (1996) Boser et al. [1992] Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. Proceedings of the Fifth Annual Workshop on Computational Learning Theory, 144–152 (1992) Ho [1995] Ho, T.K.: Random decision forests. Proceedings of Third International Conference on Document Analysis and Recognition 1, 278–282 (1995) Alpaydın [2014] Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. 
Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. 
European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Pan, B., Wang, S., Jiang, Q.: Image aesthetic assessment assisted by attributes through adversarial learning. Proceedings of the AAAI Conference on Artificial Intelligence 33, 679–686 (2019) Soydaner and Wagemans [2023] Soydaner, D., Wagemans, J.: Multi-task convolutional neural network for image aesthetic assessment. arXiv preprint arXiv: 2305.09373 (2023) Celona et al. [2022] Celona, L., Leonardi, M., Napoletano, P., Rozza, A.: Composition and style attributes guided image aesthetic assessment. IEEE Transactions on Image Processing 31, 5009–5024 (2022) Li et al. [2023a] Li, L., Huang, Y., Wu, J., Yang, Y., Li, Y., Guo, Y., Shi, G.: Theme-aware visual attribute reasoning for image aesthetics assessment. IEEE Transactions on Circuits and Systems for Video Technology (2023) Li et al. [2023b] Li, L., Zhu, T., Chen, P., Yang, Y., Li, Y., Lin, W.: Image aesthetic assessment with attribute-assisted multimodal memory network. IEEE Transactions on Circuits and Systems for Video Technology (2023) Lundberg and Lee [2017] Lundberg, S.M., Lee, S.I.: A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems 30 (2017) Drucker et al. 
[1996] Drucker, H., Burges, C.J.C., Kaufman, L., Smola, A., Vapnik, V.: Support vector regression machines. Advances in Neural Information Processing Systems 9 (1996) Boser et al. [1992] Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. Proceedings of the Fifth Annual Workshop on Computational Learning Theory, 144–152 (1992) Ho [1995] Ho, T.K.: Random decision forests. Proceedings of Third International Conference on Document Analysis and Recognition 1, 278–282 (1995) Alpaydın [2014] Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. 
[2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Soydaner, D., Wagemans, J.: Multi-task convolutional neural network for image aesthetic assessment. arXiv preprint arXiv: 2305.09373 (2023) Celona et al. 
IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. 
[2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. 
Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. 
[2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. 
[2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. 
IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. 
[2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. 
International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. 
[2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. 
[2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. 
Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. 
Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. 
[2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. 
International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. 
[2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. 
  4. Valenzise, G., Kang, C., Dufaux, F.: Advances and challenges in computational image aesthetics. Human perception of visual information: psychological and computational perspectives (2022) Lu et al. [2014] Lu, X., Lin, Z., Jin, H., Yang, J., Wang, J.Z.: RAPID: Rating pictorial aesthetics using deep learning. Proceedings of the 22nd ACM International Conference on Multimedia, 457–466 (2014) Talebi and Milanfar [2018] Talebi, H., Milanfar, P.: NIMA: Neural image assessment. IEEE Transactions on Image Processing 27(8), 3998–4011 (2018) Pan et al. [2019] Pan, B., Wang, S., Jiang, Q.: Image aesthetic assessment assisted by attributes through adversarial learning. Proceedings of the AAAI Conference on Artificial Intelligence 33, 679–686 (2019) Soydaner and Wagemans [2023] Soydaner, D., Wagemans, J.: Multi-task convolutional neural network for image aesthetic assessment. arXiv preprint arXiv: 2305.09373 (2023) Celona et al. [2022] Celona, L., Leonardi, M., Napoletano, P., Rozza, A.: Composition and style attributes guided image aesthetic assessment. IEEE Transactions on Image Processing 31, 5009–5024 (2022) Li et al. [2023a] Li, L., Huang, Y., Wu, J., Yang, Y., Li, Y., Guo, Y., Shi, G.: Theme-aware visual attribute reasoning for image aesthetics assessment. IEEE Transactions on Circuits and Systems for Video Technology (2023) Li et al. [2023b] Li, L., Zhu, T., Chen, P., Yang, Y., Li, Y., Lin, W.: Image aesthetic assessment with attribute-assisted multimodal memory network. IEEE Transactions on Circuits and Systems for Video Technology (2023) Lundberg and Lee [2017] Lundberg, S.M., Lee, S.I.: A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems 30 (2017) Drucker et al. [1996] Drucker, H., Burges, C.J.C., Kaufman, L., Smola, A., Vapnik, V.: Support vector regression machines. Advances in Neural Information Processing Systems 9 (1996) Boser et al. [1992] Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. Proceedings of the Fifth Annual Workshop on Computational Learning Theory, 144–152 (1992) Ho [1995] Ho, T.K.: Random decision forests. Proceedings of Third International Conference on Document Analysis and Recognition 1, 278–282 (1995) Alpaydın [2014] Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. 
[2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. 
IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Lu, X., Lin, Z., Jin, H., Yang, J., Wang, J.Z.: RAPID: Rating pictorial aesthetics using deep learning. Proceedings of the 22nd ACM International Conference on Multimedia, 457–466 (2014) Talebi and Milanfar [2018] Talebi, H., Milanfar, P.: NIMA: Neural image assessment. IEEE Transactions on Image Processing 27(8), 3998–4011 (2018) Pan et al. [2019] Pan, B., Wang, S., Jiang, Q.: Image aesthetic assessment assisted by attributes through adversarial learning. Proceedings of the AAAI Conference on Artificial Intelligence 33, 679–686 (2019) Soydaner and Wagemans [2023] Soydaner, D., Wagemans, J.: Multi-task convolutional neural network for image aesthetic assessment. arXiv preprint arXiv: 2305.09373 (2023) Celona et al. [2022] Celona, L., Leonardi, M., Napoletano, P., Rozza, A.: Composition and style attributes guided image aesthetic assessment. IEEE Transactions on Image Processing 31, 5009–5024 (2022) Li et al. [2023a] Li, L., Huang, Y., Wu, J., Yang, Y., Li, Y., Guo, Y., Shi, G.: Theme-aware visual attribute reasoning for image aesthetics assessment. IEEE Transactions on Circuits and Systems for Video Technology (2023) Li et al. [2023b] Li, L., Zhu, T., Chen, P., Yang, Y., Li, Y., Lin, W.: Image aesthetic assessment with attribute-assisted multimodal memory network. IEEE Transactions on Circuits and Systems for Video Technology (2023) Lundberg and Lee [2017] Lundberg, S.M., Lee, S.I.: A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems 30 (2017) Drucker et al. [1996] Drucker, H., Burges, C.J.C., Kaufman, L., Smola, A., Vapnik, V.: Support vector regression machines. Advances in Neural Information Processing Systems 9 (1996) Boser et al. [1992] Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. Proceedings of the Fifth Annual Workshop on Computational Learning Theory, 144–152 (1992) Ho [1995] Ho, T.K.: Random decision forests. 
Proceedings of Third International Conference on Document Analysis and Recognition 1, 278–282 (1995) Alpaydın [2014] Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. 
Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Talebi, H., Milanfar, P.: NIMA: Neural image assessment. IEEE Transactions on Image Processing 27(8), 3998–4011 (2018) Pan et al. [2019] Pan, B., Wang, S., Jiang, Q.: Image aesthetic assessment assisted by attributes through adversarial learning. Proceedings of the AAAI Conference on Artificial Intelligence 33, 679–686 (2019) Soydaner and Wagemans [2023] Soydaner, D., Wagemans, J.: Multi-task convolutional neural network for image aesthetic assessment. arXiv preprint arXiv: 2305.09373 (2023) Celona et al. 
[2022] Celona, L., Leonardi, M., Napoletano, P., Rozza, A.: Composition and style attributes guided image aesthetic assessment. IEEE Transactions on Image Processing 31, 5009–5024 (2022) Li et al. [2023a] Li, L., Huang, Y., Wu, J., Yang, Y., Li, Y., Guo, Y., Shi, G.: Theme-aware visual attribute reasoning for image aesthetics assessment. IEEE Transactions on Circuits and Systems for Video Technology (2023) Li et al. [2023b] Li, L., Zhu, T., Chen, P., Yang, Y., Li, Y., Lin, W.: Image aesthetic assessment with attribute-assisted multimodal memory network. IEEE Transactions on Circuits and Systems for Video Technology (2023) Lundberg and Lee [2017] Lundberg, S.M., Lee, S.I.: A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems 30 (2017) Drucker et al. [1996] Drucker, H., Burges, C.J.C., Kaufman, L., Smola, A., Vapnik, V.: Support vector regression machines. Advances in Neural Information Processing Systems 9 (1996) Boser et al. [1992] Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. Proceedings of the Fifth Annual Workshop on Computational Learning Theory, 144–152 (1992) Ho [1995] Ho, T.K.: Random decision forests. Proceedings of Third International Conference on Document Analysis and Recognition 1, 278–282 (1995) Alpaydın [2014] Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. 
[2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. 
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Pan, B., Wang, S., Jiang, Q.: Image aesthetic assessment assisted by attributes through adversarial learning. Proceedings of the AAAI Conference on Artificial Intelligence 33, 679–686 (2019) Soydaner and Wagemans [2023] Soydaner, D., Wagemans, J.: Multi-task convolutional neural network for image aesthetic assessment. arXiv preprint arXiv: 2305.09373 (2023) Celona et al. [2022] Celona, L., Leonardi, M., Napoletano, P., Rozza, A.: Composition and style attributes guided image aesthetic assessment. IEEE Transactions on Image Processing 31, 5009–5024 (2022) Li et al. [2023a] Li, L., Huang, Y., Wu, J., Yang, Y., Li, Y., Guo, Y., Shi, G.: Theme-aware visual attribute reasoning for image aesthetics assessment. IEEE Transactions on Circuits and Systems for Video Technology (2023) Li et al. [2023b] Li, L., Zhu, T., Chen, P., Yang, Y., Li, Y., Lin, W.: Image aesthetic assessment with attribute-assisted multimodal memory network. IEEE Transactions on Circuits and Systems for Video Technology (2023) Lundberg and Lee [2017] Lundberg, S.M., Lee, S.I.: A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems 30 (2017) Drucker et al. [1996] Drucker, H., Burges, C.J.C., Kaufman, L., Smola, A., Vapnik, V.: Support vector regression machines. Advances in Neural Information Processing Systems 9 (1996) Boser et al. [1992] Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. Proceedings of the Fifth Annual Workshop on Computational Learning Theory, 144–152 (1992) Ho [1995] Ho, T.K.: Random decision forests. Proceedings of Third International Conference on Document Analysis and Recognition 1, 278–282 (1995) Alpaydın [2014] Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. 
[1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. 
[2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Soydaner, D., Wagemans, J.: Multi-task convolutional neural network for image aesthetic assessment. arXiv preprint arXiv: 2305.09373 (2023) Celona et al. [2022] Celona, L., Leonardi, M., Napoletano, P., Rozza, A.: Composition and style attributes guided image aesthetic assessment. IEEE Transactions on Image Processing 31, 5009–5024 (2022) Li et al. [2023a] Li, L., Huang, Y., Wu, J., Yang, Y., Li, Y., Guo, Y., Shi, G.: Theme-aware visual attribute reasoning for image aesthetics assessment. IEEE Transactions on Circuits and Systems for Video Technology (2023) Li et al. [2023b] Li, L., Zhu, T., Chen, P., Yang, Y., Li, Y., Lin, W.: Image aesthetic assessment with attribute-assisted multimodal memory network. IEEE Transactions on Circuits and Systems for Video Technology (2023) Lundberg and Lee [2017] Lundberg, S.M., Lee, S.I.: A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems 30 (2017) Drucker et al. [1996] Drucker, H., Burges, C.J.C., Kaufman, L., Smola, A., Vapnik, V.: Support vector regression machines. Advances in Neural Information Processing Systems 9 (1996) Boser et al. [1992] Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. Proceedings of the Fifth Annual Workshop on Computational Learning Theory, 144–152 (1992) Ho [1995] Ho, T.K.: Random decision forests. Proceedings of Third International Conference on Document Analysis and Recognition 1, 278–282 (1995) Alpaydın [2014] Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. 
Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. 
[2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Celona, L., Leonardi, M., Napoletano, P., Rozza, A.: Composition and style attributes guided image aesthetic assessment. IEEE Transactions on Image Processing 31, 5009–5024 (2022) Li et al. [2023a] Li, L., Huang, Y., Wu, J., Yang, Y., Li, Y., Guo, Y., Shi, G.: Theme-aware visual attribute reasoning for image aesthetics assessment. IEEE Transactions on Circuits and Systems for Video Technology (2023) Li et al. [2023b] Li, L., Zhu, T., Chen, P., Yang, Y., Li, Y., Lin, W.: Image aesthetic assessment with attribute-assisted multimodal memory network. IEEE Transactions on Circuits and Systems for Video Technology (2023) Lundberg and Lee [2017] Lundberg, S.M., Lee, S.I.: A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems 30 (2017) Drucker et al. [1996] Drucker, H., Burges, C.J.C., Kaufman, L., Smola, A., Vapnik, V.: Support vector regression machines. Advances in Neural Information Processing Systems 9 (1996) Boser et al. [1992] Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. 
Proceedings of the Fifth Annual Workshop on Computational Learning Theory, 144–152 (1992) Ho [1995] Ho, T.K.: Random decision forests. Proceedings of Third International Conference on Document Analysis and Recognition 1, 278–282 (1995) Alpaydın [2014] Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. 
Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Li, L., Huang, Y., Wu, J., Yang, Y., Li, Y., Guo, Y., Shi, G.: Theme-aware visual attribute reasoning for image aesthetics assessment. IEEE Transactions on Circuits and Systems for Video Technology (2023) Li et al. [2023b] Li, L., Zhu, T., Chen, P., Yang, Y., Li, Y., Lin, W.: Image aesthetic assessment with attribute-assisted multimodal memory network. IEEE Transactions on Circuits and Systems for Video Technology (2023) Lundberg and Lee [2017] Lundberg, S.M., Lee, S.I.: A unified approach to interpreting model predictions. 
Advances in Neural Information Processing Systems 30 (2017) Drucker et al. [1996] Drucker, H., Burges, C.J.C., Kaufman, L., Smola, A., Vapnik, V.: Support vector regression machines. Advances in Neural Information Processing Systems 9 (1996) Boser et al. [1992] Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. Proceedings of the Fifth Annual Workshop on Computational Learning Theory, 144–152 (1992) Ho [1995] Ho, T.K.: Random decision forests. Proceedings of Third International Conference on Document Analysis and Recognition 1, 278–282 (1995) Alpaydın [2014] Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. 
[2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Li, L., Zhu, T., Chen, P., Yang, Y., Li, Y., Lin, W.: Image aesthetic assessment with attribute-assisted multimodal memory network. 
MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. 
[2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. 
Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. 
arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. 
[2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. 
[2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. 
International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. 
[2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. 
Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. 
[2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. 
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. 
IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. 
European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. 
Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. 
International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Soydaner, D., Wagemans, J.: Multi-task convolutional neural network for image aesthetic assessment. arXiv preprint arXiv: 2305.09373 (2023) Celona et al. [2022] Celona, L., Leonardi, M., Napoletano, P., Rozza, A.: Composition and style attributes guided image aesthetic assessment. IEEE Transactions on Image Processing 31, 5009–5024 (2022) Li et al. [2023a] Li, L., Huang, Y., Wu, J., Yang, Y., Li, Y., Guo, Y., Shi, G.: Theme-aware visual attribute reasoning for image aesthetics assessment. IEEE Transactions on Circuits and Systems for Video Technology (2023) Li et al. [2023b] Li, L., Zhu, T., Chen, P., Yang, Y., Li, Y., Lin, W.: Image aesthetic assessment with attribute-assisted multimodal memory network. IEEE Transactions on Circuits and Systems for Video Technology (2023) Lundberg and Lee [2017] Lundberg, S.M., Lee, S.I.: A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems 30 (2017) Drucker et al. [1996] Drucker, H., Burges, C.J.C., Kaufman, L., Smola, A., Vapnik, V.: Support vector regression machines. Advances in Neural Information Processing Systems 9 (1996) Boser et al. [1992] Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. Proceedings of the Fifth Annual Workshop on Computational Learning Theory, 144–152 (1992) Ho [1995] Ho, T.K.: Random decision forests. Proceedings of Third International Conference on Document Analysis and Recognition 1, 278–282 (1995) Alpaydın [2014] Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. 
International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. 
IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Celona, L., Leonardi, M., Napoletano, P., Rozza, A.: Composition and style attributes guided image aesthetic assessment. IEEE Transactions on Image Processing 31, 5009–5024 (2022) Li et al. [2023a] Li, L., Huang, Y., Wu, J., Yang, Y., Li, Y., Guo, Y., Shi, G.: Theme-aware visual attribute reasoning for image aesthetics assessment. IEEE Transactions on Circuits and Systems for Video Technology (2023) Li et al. [2023b] Li, L., Zhu, T., Chen, P., Yang, Y., Li, Y., Lin, W.: Image aesthetic assessment with attribute-assisted multimodal memory network. IEEE Transactions on Circuits and Systems for Video Technology (2023) Lundberg and Lee [2017] Lundberg, S.M., Lee, S.I.: A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems 30 (2017) Drucker et al. [1996] Drucker, H., Burges, C.J.C., Kaufman, L., Smola, A., Vapnik, V.: Support vector regression machines. Advances in Neural Information Processing Systems 9 (1996) Boser et al. [1992] Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. Proceedings of the Fifth Annual Workshop on Computational Learning Theory, 144–152 (1992) Ho [1995] Ho, T.K.: Random decision forests. Proceedings of Third International Conference on Document Analysis and Recognition 1, 278–282 (1995) Alpaydın [2014] Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. 
International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. 
[2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Li, L., Huang, Y., Wu, J., Yang, Y., Li, Y., Guo, Y., Shi, G.: Theme-aware visual attribute reasoning for image aesthetics assessment. IEEE Transactions on Circuits and Systems for Video Technology (2023) Li et al. [2023b] Li, L., Zhu, T., Chen, P., Yang, Y., Li, Y., Lin, W.: Image aesthetic assessment with attribute-assisted multimodal memory network. IEEE Transactions on Circuits and Systems for Video Technology (2023) Lundberg and Lee [2017] Lundberg, S.M., Lee, S.I.: A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems 30 (2017) Drucker et al. [1996] Drucker, H., Burges, C.J.C., Kaufman, L., Smola, A., Vapnik, V.: Support vector regression machines. Advances in Neural Information Processing Systems 9 (1996) Boser et al. [1992] Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. Proceedings of the Fifth Annual Workshop on Computational Learning Theory, 144–152 (1992) Ho [1995] Ho, T.K.: Random decision forests. Proceedings of Third International Conference on Document Analysis and Recognition 1, 278–282 (1995) Alpaydın [2014] Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. 
MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. 
[2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Li, L., Zhu, T., Chen, P., Yang, Y., Li, Y., Lin, W.: Image aesthetic assessment with attribute-assisted multimodal memory network. IEEE Transactions on Circuits and Systems for Video Technology (2023) Lundberg and Lee [2017] Lundberg, S.M., Lee, S.I.: A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems 30 (2017) Drucker et al. [1996] Drucker, H., Burges, C.J.C., Kaufman, L., Smola, A., Vapnik, V.: Support vector regression machines. Advances in Neural Information Processing Systems 9 (1996) Boser et al. [1992] Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. Proceedings of the Fifth Annual Workshop on Computational Learning Theory, 144–152 (1992) Ho [1995] Ho, T.K.: Random decision forests. Proceedings of Third International Conference on Document Analysis and Recognition 1, 278–282 (1995) Alpaydın [2014] Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. 
Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. 
[2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Lundberg, S.M., Lee, S.I.: A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems 30 (2017) Drucker et al. [1996] Drucker, H., Burges, C.J.C., Kaufman, L., Smola, A., Vapnik, V.: Support vector regression machines. Advances in Neural Information Processing Systems 9 (1996) Boser et al. [1992] Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. Proceedings of the Fifth Annual Workshop on Computational Learning Theory, 144–152 (1992) Ho [1995] Ho, T.K.: Random decision forests. Proceedings of Third International Conference on Document Analysis and Recognition 1, 278–282 (1995) Alpaydın [2014] Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. 
[2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. 
[2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Drucker, H., Burges, C.J.C., Kaufman, L., Smola, A., Vapnik, V.: Support vector regression machines. Advances in Neural Information Processing Systems 9 (1996) Boser et al. [1992] Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. Proceedings of the Fifth Annual Workshop on Computational Learning Theory, 144–152 (1992) Ho [1995] Ho, T.K.: Random decision forests. Proceedings of Third International Conference on Document Analysis and Recognition 1, 278–282 (1995) Alpaydın [2014] Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. 
MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. 
[2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. Proceedings of the Fifth Annual Workshop on Computational Learning Theory, 144–152 (1992) Ho [1995] Ho, T.K.: Random decision forests. Proceedings of Third International Conference on Document Analysis and Recognition 1, 278–282 (1995) Alpaydın [2014] Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. 
[2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. 
IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Ho, T.K.: Random decision forests. Proceedings of Third International Conference on Document Analysis and Recognition 1, 278–282 (1995) Alpaydın [2014] Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. 
[2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. 
[2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. 
[2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. 
Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. 
Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. 
Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. 
Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. 
[2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. 
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. 
Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Schapire, R.E.: The strenght of weak learnability. 
Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. 
Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. 
IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. 
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. 
[2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. 
[2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. 
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. 
International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020)
  6. Talebi, H., Milanfar, P.: NIMA: Neural image assessment. IEEE Transactions on Image Processing 27(8), 3998–4011 (2018) Pan et al. [2019] Pan, B., Wang, S., Jiang, Q.: Image aesthetic assessment assisted by attributes through adversarial learning. Proceedings of the AAAI Conference on Artificial Intelligence 33, 679–686 (2019) Soydaner and Wagemans [2023] Soydaner, D., Wagemans, J.: Multi-task convolutional neural network for image aesthetic assessment. arXiv preprint arXiv: 2305.09373 (2023) Celona et al. [2022] Celona, L., Leonardi, M., Napoletano, P., Rozza, A.: Composition and style attributes guided image aesthetic assessment. IEEE Transactions on Image Processing 31, 5009–5024 (2022) Li et al. [2023a] Li, L., Huang, Y., Wu, J., Yang, Y., Li, Y., Guo, Y., Shi, G.: Theme-aware visual attribute reasoning for image aesthetics assessment. IEEE Transactions on Circuits and Systems for Video Technology (2023) Li et al. [2023b] Li, L., Zhu, T., Chen, P., Yang, Y., Li, Y., Lin, W.: Image aesthetic assessment with attribute-assisted multimodal memory network. IEEE Transactions on Circuits and Systems for Video Technology (2023) Lundberg and Lee [2017] Lundberg, S.M., Lee, S.I.: A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems 30 (2017) Drucker et al. [1996] Drucker, H., Burges, C.J.C., Kaufman, L., Smola, A., Vapnik, V.: Support vector regression machines. Advances in Neural Information Processing Systems 9 (1996) Boser et al. [1992] Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. Proceedings of the Fifth Annual Workshop on Computational Learning Theory, 144–152 (1992) Ho [1995] Ho, T.K.: Random decision forests. Proceedings of Third International Conference on Document Analysis and Recognition 1, 278–282 (1995) Alpaydın [2014] Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. 
[2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. 
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Pan, B., Wang, S., Jiang, Q.: Image aesthetic assessment assisted by attributes through adversarial learning. Proceedings of the AAAI Conference on Artificial Intelligence 33, 679–686 (2019) Soydaner and Wagemans [2023] Soydaner, D., Wagemans, J.: Multi-task convolutional neural network for image aesthetic assessment. arXiv preprint arXiv: 2305.09373 (2023) Celona et al. [2022] Celona, L., Leonardi, M., Napoletano, P., Rozza, A.: Composition and style attributes guided image aesthetic assessment. IEEE Transactions on Image Processing 31, 5009–5024 (2022) Li et al. [2023a] Li, L., Huang, Y., Wu, J., Yang, Y., Li, Y., Guo, Y., Shi, G.: Theme-aware visual attribute reasoning for image aesthetics assessment. IEEE Transactions on Circuits and Systems for Video Technology (2023) Li et al. [2023b] Li, L., Zhu, T., Chen, P., Yang, Y., Li, Y., Lin, W.: Image aesthetic assessment with attribute-assisted multimodal memory network. IEEE Transactions on Circuits and Systems for Video Technology (2023) Lundberg and Lee [2017] Lundberg, S.M., Lee, S.I.: A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems 30 (2017) Drucker et al. [1996] Drucker, H., Burges, C.J.C., Kaufman, L., Smola, A., Vapnik, V.: Support vector regression machines. Advances in Neural Information Processing Systems 9 (1996) Boser et al. [1992] Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. Proceedings of the Fifth Annual Workshop on Computational Learning Theory, 144–152 (1992) Ho [1995] Ho, T.K.: Random decision forests. Proceedings of Third International Conference on Document Analysis and Recognition 1, 278–282 (1995) Alpaydın [2014] Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. 
Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. 
Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Soydaner, D., Wagemans, J.: Multi-task convolutional neural network for image aesthetic assessment. arXiv preprint arXiv: 2305.09373 (2023) Celona et al. [2022] Celona, L., Leonardi, M., Napoletano, P., Rozza, A.: Composition and style attributes guided image aesthetic assessment. IEEE Transactions on Image Processing 31, 5009–5024 (2022) Li et al. [2023a] Li, L., Huang, Y., Wu, J., Yang, Y., Li, Y., Guo, Y., Shi, G.: Theme-aware visual attribute reasoning for image aesthetics assessment. IEEE Transactions on Circuits and Systems for Video Technology (2023) Li et al. [2023b] Li, L., Zhu, T., Chen, P., Yang, Y., Li, Y., Lin, W.: Image aesthetic assessment with attribute-assisted multimodal memory network. IEEE Transactions on Circuits and Systems for Video Technology (2023) Lundberg and Lee [2017] Lundberg, S.M., Lee, S.I.: A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems 30 (2017) Drucker et al. [1996] Drucker, H., Burges, C.J.C., Kaufman, L., Smola, A., Vapnik, V.: Support vector regression machines. Advances in Neural Information Processing Systems 9 (1996) Boser et al. [1992] Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. Proceedings of the Fifth Annual Workshop on Computational Learning Theory, 144–152 (1992) Ho [1995] Ho, T.K.: Random decision forests. 
Proceedings of Third International Conference on Document Analysis and Recognition 1, 278–282 (1995) Alpaydın [2014] Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. 
Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Celona, L., Leonardi, M., Napoletano, P., Rozza, A.: Composition and style attributes guided image aesthetic assessment. IEEE Transactions on Image Processing 31, 5009–5024 (2022) Li et al. [2023a] Li, L., Huang, Y., Wu, J., Yang, Y., Li, Y., Guo, Y., Shi, G.: Theme-aware visual attribute reasoning for image aesthetics assessment. IEEE Transactions on Circuits and Systems for Video Technology (2023) Li et al. 
[2023b] Li, L., Zhu, T., Chen, P., Yang, Y., Li, Y., Lin, W.: Image aesthetic assessment with attribute-assisted multimodal memory network. IEEE Transactions on Circuits and Systems for Video Technology (2023) Lundberg and Lee [2017] Lundberg, S.M., Lee, S.I.: A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems 30 (2017) Drucker et al. [1996] Drucker, H., Burges, C.J.C., Kaufman, L., Smola, A., Vapnik, V.: Support vector regression machines. Advances in Neural Information Processing Systems 9 (1996) Boser et al. [1992] Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. Proceedings of the Fifth Annual Workshop on Computational Learning Theory, 144–152 (1992) Ho [1995] Ho, T.K.: Random decision forests. Proceedings of Third International Conference on Document Analysis and Recognition 1, 278–282 (1995) Alpaydın [2014] Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. 
Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. 
arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Li, L., Huang, Y., Wu, J., Yang, Y., Li, Y., Guo, Y., Shi, G.: Theme-aware visual attribute reasoning for image aesthetics assessment. IEEE Transactions on Circuits and Systems for Video Technology (2023) Li et al. [2023b] Li, L., Zhu, T., Chen, P., Yang, Y., Li, Y., Lin, W.: Image aesthetic assessment with attribute-assisted multimodal memory network. IEEE Transactions on Circuits and Systems for Video Technology (2023) Lundberg and Lee [2017] Lundberg, S.M., Lee, S.I.: A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems 30 (2017) Drucker et al. [1996] Drucker, H., Burges, C.J.C., Kaufman, L., Smola, A., Vapnik, V.: Support vector regression machines. Advances in Neural Information Processing Systems 9 (1996) Boser et al. [1992] Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. Proceedings of the Fifth Annual Workshop on Computational Learning Theory, 144–152 (1992) Ho [1995] Ho, T.K.: Random decision forests. Proceedings of Third International Conference on Document Analysis and Recognition 1, 278–282 (1995) Alpaydın [2014] Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. 
[2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. 
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Li, L., Zhu, T., Chen, P., Yang, Y., Li, Y., Lin, W.: Image aesthetic assessment with attribute-assisted multimodal memory network. IEEE Transactions on Circuits and Systems for Video Technology (2023) Lundberg and Lee [2017] Lundberg, S.M., Lee, S.I.: A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems 30 (2017) Drucker et al. [1996] Drucker, H., Burges, C.J.C., Kaufman, L., Smola, A., Vapnik, V.: Support vector regression machines. Advances in Neural Information Processing Systems 9 (1996) Boser et al. [1992] Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. Proceedings of the Fifth Annual Workshop on Computational Learning Theory, 144–152 (1992) Ho [1995] Ho, T.K.: Random decision forests. Proceedings of Third International Conference on Document Analysis and Recognition 1, 278–282 (1995) Alpaydın [2014] Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. 
Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. 
Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. 
[2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. 
Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. 
Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. 
[2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. 
Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. 
[2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. 
Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. 
IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. 
[2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. 
International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. 
[2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. 
Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. 
IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. 
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. 
[2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. 
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. 
[2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. 
arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. 
[2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. 
IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. 
Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. 
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. 
[2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. 
International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020)
Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Li, L., Zhu, T., Chen, P., Yang, Y., Li, Y., Lin, W.: Image aesthetic assessment with attribute-assisted multimodal memory network. IEEE Transactions on Circuits and Systems for Video Technology (2023) Lundberg and Lee [2017] Lundberg, S.M., Lee, S.I.: A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems 30 (2017) Drucker et al. [1996] Drucker, H., Burges, C.J.C., Kaufman, L., Smola, A., Vapnik, V.: Support vector regression machines. Advances in Neural Information Processing Systems 9 (1996) Boser et al. [1992] Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. Proceedings of the Fifth Annual Workshop on Computational Learning Theory, 144–152 (1992) Ho [1995] Ho, T.K.: Random decision forests. 
Proceedings of Third International Conference on Document Analysis and Recognition 1, 278–282 (1995) Alpaydın [2014] Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. 
Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Lundberg, S.M., Lee, S.I.: A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems 30 (2017) Drucker et al. [1996] Drucker, H., Burges, C.J.C., Kaufman, L., Smola, A., Vapnik, V.: Support vector regression machines. Advances in Neural Information Processing Systems 9 (1996) Boser et al. [1992] Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. 
Proceedings of the Fifth Annual Workshop on Computational Learning Theory, 144–152 (1992) Ho [1995] Ho, T.K.: Random decision forests. Proceedings of Third International Conference on Document Analysis and Recognition 1, 278–282 (1995) Alpaydın [2014] Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. 
Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Drucker, H., Burges, C.J.C., Kaufman, L., Smola, A., Vapnik, V.: Support vector regression machines. Advances in Neural Information Processing Systems 9 (1996) Boser et al. [1992] Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. Proceedings of the Fifth Annual Workshop on Computational Learning Theory, 144–152 (1992) Ho [1995] Ho, T.K.: Random decision forests. 
Proceedings of Third International Conference on Document Analysis and Recognition 1, 278–282 (1995) Alpaydın [2014] Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. 
Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. Proceedings of the Fifth Annual Workshop on Computational Learning Theory, 144–152 (1992) Ho [1995] Ho, T.K.: Random decision forests. Proceedings of Third International Conference on Document Analysis and Recognition 1, 278–282 (1995) Alpaydın [2014] Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. 
O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. 
[2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Ho, T.K.: Random decision forests. Proceedings of Third International Conference on Document Analysis and Recognition 1, 278–282 (1995) Alpaydın [2014] Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. 
Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. 
[2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. 
[2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. 
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. 
[2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. 
Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. 
Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. 
International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. 
Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. 
International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. 
arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. 
International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. 
[2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. 
[2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. 
IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. 
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. 
[2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. 
Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. 
Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. 
Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020)
  8. Soydaner, D., Wagemans, J.: Multi-task convolutional neural network for image aesthetic assessment. arXiv preprint arXiv: 2305.09373 (2023) Celona et al. [2022] Celona, L., Leonardi, M., Napoletano, P., Rozza, A.: Composition and style attributes guided image aesthetic assessment. IEEE Transactions on Image Processing 31, 5009–5024 (2022) Li et al. [2023a] Li, L., Huang, Y., Wu, J., Yang, Y., Li, Y., Guo, Y., Shi, G.: Theme-aware visual attribute reasoning for image aesthetics assessment. IEEE Transactions on Circuits and Systems for Video Technology (2023) Li et al. [2023b] Li, L., Zhu, T., Chen, P., Yang, Y., Li, Y., Lin, W.: Image aesthetic assessment with attribute-assisted multimodal memory network. IEEE Transactions on Circuits and Systems for Video Technology (2023) Lundberg and Lee [2017] Lundberg, S.M., Lee, S.I.: A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems 30 (2017) Drucker et al. [1996] Drucker, H., Burges, C.J.C., Kaufman, L., Smola, A., Vapnik, V.: Support vector regression machines. Advances in Neural Information Processing Systems 9 (1996) Boser et al. [1992] Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. Proceedings of the Fifth Annual Workshop on Computational Learning Theory, 144–152 (1992) Ho [1995] Ho, T.K.: Random decision forests. Proceedings of Third International Conference on Document Analysis and Recognition 1, 278–282 (1995) Alpaydın [2014] Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. 
International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. 
IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Celona, L., Leonardi, M., Napoletano, P., Rozza, A.: Composition and style attributes guided image aesthetic assessment. IEEE Transactions on Image Processing 31, 5009–5024 (2022) Li et al. [2023a] Li, L., Huang, Y., Wu, J., Yang, Y., Li, Y., Guo, Y., Shi, G.: Theme-aware visual attribute reasoning for image aesthetics assessment. IEEE Transactions on Circuits and Systems for Video Technology (2023) Li et al. [2023b] Li, L., Zhu, T., Chen, P., Yang, Y., Li, Y., Lin, W.: Image aesthetic assessment with attribute-assisted multimodal memory network. IEEE Transactions on Circuits and Systems for Video Technology (2023) Lundberg and Lee [2017] Lundberg, S.M., Lee, S.I.: A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems 30 (2017) Drucker et al. [1996] Drucker, H., Burges, C.J.C., Kaufman, L., Smola, A., Vapnik, V.: Support vector regression machines. Advances in Neural Information Processing Systems 9 (1996) Boser et al. [1992] Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. Proceedings of the Fifth Annual Workshop on Computational Learning Theory, 144–152 (1992) Ho [1995] Ho, T.K.: Random decision forests. Proceedings of Third International Conference on Document Analysis and Recognition 1, 278–282 (1995) Alpaydın [2014] Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. 
International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. 
[2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Li, L., Huang, Y., Wu, J., Yang, Y., Li, Y., Guo, Y., Shi, G.: Theme-aware visual attribute reasoning for image aesthetics assessment. IEEE Transactions on Circuits and Systems for Video Technology (2023) Li et al. [2023b] Li, L., Zhu, T., Chen, P., Yang, Y., Li, Y., Lin, W.: Image aesthetic assessment with attribute-assisted multimodal memory network. IEEE Transactions on Circuits and Systems for Video Technology (2023) Lundberg and Lee [2017] Lundberg, S.M., Lee, S.I.: A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems 30 (2017) Drucker et al. [1996] Drucker, H., Burges, C.J.C., Kaufman, L., Smola, A., Vapnik, V.: Support vector regression machines. Advances in Neural Information Processing Systems 9 (1996) Boser et al. [1992] Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. Proceedings of the Fifth Annual Workshop on Computational Learning Theory, 144–152 (1992) Ho [1995] Ho, T.K.: Random decision forests. Proceedings of Third International Conference on Document Analysis and Recognition 1, 278–282 (1995) Alpaydın [2014] Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. 
MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. 
[2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Li, L., Zhu, T., Chen, P., Yang, Y., Li, Y., Lin, W.: Image aesthetic assessment with attribute-assisted multimodal memory network. IEEE Transactions on Circuits and Systems for Video Technology (2023) Lundberg and Lee [2017] Lundberg, S.M., Lee, S.I.: A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems 30 (2017) Drucker et al. [1996] Drucker, H., Burges, C.J.C., Kaufman, L., Smola, A., Vapnik, V.: Support vector regression machines. Advances in Neural Information Processing Systems 9 (1996) Boser et al. [1992] Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. Proceedings of the Fifth Annual Workshop on Computational Learning Theory, 144–152 (1992) Ho [1995] Ho, T.K.: Random decision forests. Proceedings of Third International Conference on Document Analysis and Recognition 1, 278–282 (1995) Alpaydın [2014] Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. 
Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. 
[2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Lundberg, S.M., Lee, S.I.: A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems 30 (2017) Drucker et al. [1996] Drucker, H., Burges, C.J.C., Kaufman, L., Smola, A., Vapnik, V.: Support vector regression machines. Advances in Neural Information Processing Systems 9 (1996) Boser et al. [1992] Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. Proceedings of the Fifth Annual Workshop on Computational Learning Theory, 144–152 (1992) Ho [1995] Ho, T.K.: Random decision forests. Proceedings of Third International Conference on Document Analysis and Recognition 1, 278–282 (1995) Alpaydın [2014] Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. 
[2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. 
[2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Drucker, H., Burges, C.J.C., Kaufman, L., Smola, A., Vapnik, V.: Support vector regression machines. Advances in Neural Information Processing Systems 9 (1996) Boser et al. [1992] Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. Proceedings of the Fifth Annual Workshop on Computational Learning Theory, 144–152 (1992) Ho [1995] Ho, T.K.: Random decision forests. Proceedings of Third International Conference on Document Analysis and Recognition 1, 278–282 (1995) Alpaydın [2014] Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. 
MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. 
[2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. Proceedings of the Fifth Annual Workshop on Computational Learning Theory, 144–152 (1992) Ho [1995] Ho, T.K.: Random decision forests. Proceedings of Third International Conference on Document Analysis and Recognition 1, 278–282 (1995) Alpaydın [2014] Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. 
[2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. 
IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Ho, T.K.: Random decision forests. Proceedings of Third International Conference on Document Analysis and Recognition 1, 278–282 (1995) Alpaydın [2014] Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. 
[2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. 
Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. 
Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. 
[2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. 
[2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. 
[2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. 
International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. 
[2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. 
[2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. 
Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. 
[2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. 
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. 
Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. 
[2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. 
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. 
[2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. 
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. 
International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020)
[2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. 
Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. 
[2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. 
[2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. 
Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. 
European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. 
[2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. 
[2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. 
Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. 
[2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. 
IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. 
[2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. 
[2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. 
IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. 
Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. 
International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. 
IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. 
arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. 
[2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. 
[2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. 
International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. 
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. 
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. 
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. 
[2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. 
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. 
[2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. 
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. 
International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. 
[2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. 
[2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. 
[2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. 
IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. 
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020)
  10. Li, L., Huang, Y., Wu, J., Yang, Y., Li, Y., Guo, Y., Shi, G.: Theme-aware visual attribute reasoning for image aesthetics assessment. IEEE Transactions on Circuits and Systems for Video Technology (2023) Li et al. [2023b] Li, L., Zhu, T., Chen, P., Yang, Y., Li, Y., Lin, W.: Image aesthetic assessment with attribute-assisted multimodal memory network. IEEE Transactions on Circuits and Systems for Video Technology (2023) Lundberg and Lee [2017] Lundberg, S.M., Lee, S.I.: A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems 30 (2017) Drucker et al. [1996] Drucker, H., Burges, C.J.C., Kaufman, L., Smola, A., Vapnik, V.: Support vector regression machines. Advances in Neural Information Processing Systems 9 (1996) Boser et al. [1992] Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. Proceedings of the Fifth Annual Workshop on Computational Learning Theory, 144–152 (1992) Ho [1995] Ho, T.K.: Random decision forests. Proceedings of Third International Conference on Document Analysis and Recognition 1, 278–282 (1995) Alpaydın [2014] Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. 
[2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. 
Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Li, L., Zhu, T., Chen, P., Yang, Y., Li, Y., Lin, W.: Image aesthetic assessment with attribute-assisted multimodal memory network. IEEE Transactions on Circuits and Systems for Video Technology (2023) Lundberg and Lee [2017] Lundberg, S.M., Lee, S.I.: A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems 30 (2017) Drucker et al. [1996] Drucker, H., Burges, C.J.C., Kaufman, L., Smola, A., Vapnik, V.: Support vector regression machines. Advances in Neural Information Processing Systems 9 (1996) Boser et al. [1992] Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. Proceedings of the Fifth Annual Workshop on Computational Learning Theory, 144–152 (1992) Ho [1995] Ho, T.K.: Random decision forests. Proceedings of Third International Conference on Document Analysis and Recognition 1, 278–282 (1995) Alpaydın [2014] Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. 
[2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. 
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Lundberg, S.M., Lee, S.I.: A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems 30 (2017) Drucker et al. [1996] Drucker, H., Burges, C.J.C., Kaufman, L., Smola, A., Vapnik, V.: Support vector regression machines. Advances in Neural Information Processing Systems 9 (1996) Boser et al. [1992] Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. Proceedings of the Fifth Annual Workshop on Computational Learning Theory, 144–152 (1992) Ho [1995] Ho, T.K.: Random decision forests. Proceedings of Third International Conference on Document Analysis and Recognition 1, 278–282 (1995) Alpaydın [2014] Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. 
[2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. 
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Drucker, H., Burges, C.J.C., Kaufman, L., Smola, A., Vapnik, V.: Support vector regression machines. Advances in Neural Information Processing Systems 9 (1996) Boser et al. [1992] Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. Proceedings of the Fifth Annual Workshop on Computational Learning Theory, 144–152 (1992) Ho [1995] Ho, T.K.: Random decision forests. Proceedings of Third International Conference on Document Analysis and Recognition 1, 278–282 (1995) Alpaydın [2014] Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. 
[2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. 
Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. Proceedings of the Fifth Annual Workshop on Computational Learning Theory, 144–152 (1992) Ho [1995] Ho, T.K.: Random decision forests. Proceedings of Third International Conference on Document Analysis and Recognition 1, 278–282 (1995) Alpaydın [2014] Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. 
Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Ho, T.K.: Random decision forests. 
Proceedings of Third International Conference on Document Analysis and Recognition 1, 278–282 (1995) Alpaydın [2014] Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. 
Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. 
Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. 
European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. 
Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. 
[2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. 
International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. 
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. 
[2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. 
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. 
[2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. 
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. 
arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. 
[2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. 
IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. 
Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. 
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. 
[2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. 
International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020)
11. Li, L., Zhu, T., Chen, P., Yang, Y., Li, Y., Lin, W.: Image aesthetic assessment with attribute-assisted multimodal memory network. IEEE Transactions on Circuits and Systems for Video Technology (2023)
12. Lundberg, S.M., Lee, S.I.: A unified approach to interpreting model predictions. Advances in Neural Information Processing Systems 30 (2017)
13. Drucker, H., Burges, C.J.C., Kaufman, L., Smola, A., Vapnik, V.: Support vector regression machines. Advances in Neural Information Processing Systems 9 (1996)
14. Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. Proceedings of the Fifth Annual Workshop on Computational Learning Theory, 144–152 (1992)
15. Ho, T.K.: Random decision forests. Proceedings of the Third International Conference on Document Analysis and Recognition 1, 278–282 (1995)
16. Alpaydın, E.: Introduction to machine learning. The MIT Press (2014)
17. Géron, A.: Hands-On Machine Learning with Scikit-Learn & TensorFlow. O'Reilly Media, Inc. (2017)
18. Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001)
19. Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996)
20. Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016)
21. Schapire, R.E.: The strength of weak learnability. Machine Learning 5, 197–227 (1990)
22. Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001)
23. Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016)
24. Schölkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002)
25. Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature 323, 533–536 (1986)
26. Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021)
27. Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022)
28. Krizhevsky, A., Sutskever, I., Hinton, G.: ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems 25 (2012)
29. Ramesh, A., et al.: Zero-shot text-to-image generation. International Conference on Machine Learning, 8821–8831 (2021)
30. Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017)
31. Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020)
32. Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv:2107.07045 (2021)
33. Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022)
34. Ribeiro, M.T., Singh, S., Guestrin, C.: "Why should I trust you?": Explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016)
35. Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017)
36. Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953)
37. Winter, E.: The Shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002)
38. Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022)
39. Van den Broeck, G., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022)
40. Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with Shapley values to explain image models. International Conference on Machine Learning (2022)
41. Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016)
42. Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020)
43. Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021)
44. Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023)
45. Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022)
46. Yang, Y., Xu, L., Li, L., Qie, N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022)
47. Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020)
48. Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020)
49. Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011)
50. Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010)
51. Kingma, D., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
52. Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020)
Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. 
International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. 
Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. 
[2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. 
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. 
[2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. 
[2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. 
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. 
Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. 
[2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. 
[2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. 
[2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. 
[2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. 
[2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. 
International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. 
[2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. 
International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. 
Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. 
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. 
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. 
Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. 
Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. 
Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. 
[2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. 
International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. 
[2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. 
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. 
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020)
Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. 
[2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. 
[1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. 
[2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. 
International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. 
IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. 
[2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. 
Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. 
European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. 
[2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. 
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. 
International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. 
MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. 
[2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. 
Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. 
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. 
[2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. 
Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. 
Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. 
Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020)
  13. Drucker, H., Burges, C.J.C., Kaufman, L., Smola, A., Vapnik, V.: Support vector regression machines. Advances in Neural Information Processing Systems 9 (1996) Boser et al. [1992] Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. Proceedings of the Fifth Annual Workshop on Computational Learning Theory, 144–152 (1992) Ho [1995] Ho, T.K.: Random decision forests. Proceedings of Third International Conference on Document Analysis and Recognition 1, 278–282 (1995) Alpaydın [2014] Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. 
[2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Boser, B.E., Guyon, I.M., Vapnik, V.N.: A training algorithm for optimal margin classifiers. 
Proceedings of the Fifth Annual Workshop on Computational Learning Theory, 144–152 (1992) Ho [1995] Ho, T.K.: Random decision forests. Proceedings of Third International Conference on Document Analysis and Recognition 1, 278–282 (1995) Alpaydın [2014] Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. 
Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Ho, T.K.: Random decision forests. Proceedings of Third International Conference on Document Analysis and Recognition 1, 278–282 (1995) Alpaydın [2014] Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. 
Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. 
[2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Alpaydın, E.: Introduction to machine learning. The MIT Press (2014) Géron [2017] Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. 
[1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. 
[2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Géron, A.: Hands-On Machine Learning with Scikit-Learn & Tensorflow. O’Reilly Media, Inc., 1005 Gravenstein Highway North, Sebastopol, CA 95472 (2017) Breiman [2001] Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. 
International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. 
IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001) Breiman [1996] Breiman, L.: Bagging predictors. Machine Learning 24, 123–140 (1996) Chen and Guestrin [2016] Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. 
Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. 
International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. 
Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. 
International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. 
arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. 
International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. 
[2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. 
[2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. 
IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. 
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. 
[2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. 
Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. 
Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. 
Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020)
Annals of Statistics 29(5), 1189–1232 (2001)
  23. Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016)
  24. Schölkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002)
  25. Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986)
  26. Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021)
  27. Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022)
  28. Krizhevsky, A., Sutskever, I., Hinton, G.: ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems 25 (2012)
  29. Ramesh, A., et al.: Zero-shot text-to-image generation. International Conference on Machine Learning, 8821–8831 (2021)
  30. Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017)
  31. Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020)
  32. Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv:2107.07045 (2021)
  33. Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022)
  34. Ribeiro, M.T., Singh, S., Guestrin, C.: "Why should I trust you?": Explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016)
  35. Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017)
  36. Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953)
  37. Winter, E.: The Shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002)
  38. Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022)
  39. Van den Broeck, G., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022)
  40. Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with Shapley values to explain image models. International Conference on Machine Learning (2022)
  41. Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016)
  42. Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020)
  43. Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021)
  44. Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023)
  45. Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022)
  46. Yang, Y., Xu, L., Li, L., Qie, N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022)
  47. Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020)
  48. Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020)
  49. Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011)
  50. Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010)
  51. Kingma, D., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
  52. Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020)
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. 
Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. 
[2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. 
[2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. 
[2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. 
[2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. 
[2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. 
International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. 
[2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. 
International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. 
Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. 
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. 
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. 
Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. 
[2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. 
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. 
Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. 
Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. 
[2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. 
[2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. 
Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. 
arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. 
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. 
[2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. 
Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. 
[2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. 
International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. 
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. 
[2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. 
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. 
[2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. 
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. 
arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. 
[2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. 
IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. 
Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. 
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. 
[2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. 
International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020)
  23. Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016)
  24. Schölkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002)
  25. Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986)
  26. Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021)
  27. Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022)
  28. Krizhevsky, A., Sutskever, I., Hinton, G.: ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems 25 (2012)
  29. Ramesh, A., et al.: Zero-shot text-to-image generation. International Conference on Machine Learning, 8821–8831 (2021)
  30. Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017)
  31. Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020)
  32. Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv:2107.07045 (2021)
  33. Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022)
  34. Ribeiro, M.T., Singh, S., Guestrin, C.: "Why should I trust you?": Explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016)
  35. Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017)
  36. Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953)
  37. Winter, E.: The Shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002)
  38. Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022)
  39. Van den Broeck, G., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022)
  40. Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with Shapley values to explain image models. International Conference on Machine Learning (2022)
  41. Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016)
  42. Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020)
  43. Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021)
  44. Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023)
  45. Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022)
  46. Yang, Y., Xu, L., Li, L., Qie, N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022)
  47. Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020)
  48. Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020)
  49. Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011)
  50. Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010)
  51. Kingma, D., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
  52. Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020)
European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. 
[2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. 
[2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. 
Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. 
[2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. 
IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. 
[2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. 
[2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. 
IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. 
Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. 
International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. 
IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. 
arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. 
[2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. 
[2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. 
International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. 
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. 
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. 
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. 
[2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. 
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. 
[2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. 
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. 
International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. 
[2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. 
[2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. 
[2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. 
IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. 
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020)
  23. Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016)
  24. Schölkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002)
  25. Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986)
  26. Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021)
  27. Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022)
  28. Krizhevsky, A., Sutskever, I., Hinton, G.: ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems 25 (2012)
  29. Ramesh, A., et al.: Zero-shot text-to-image generation. International Conference on Machine Learning, 8821–8831 (2021)
  30. Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017)
  31. Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020)
  32. Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv:2107.07045 (2021)
  33. Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022)
  34. Ribeiro, M.T., Singh, S., Guestrin, C.: "Why should I trust you?": Explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016)
  35. Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017)
  36. Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953)
  37. Winter, E.: The Shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002)
  38. Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022)
  39. Van den Broeck, G., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022)
  40. Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with Shapley values to explain image models. International Conference on Machine Learning (2022)
  41. Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016)
  42. Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020)
  43. Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021)
  44. Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023)
  45. Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022)
  46. Yang, Y., Xu, L., Li, L., Qie, N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022)
  47. Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020)
  48. Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020)
  49. Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011)
  50. Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010)
  51. Kingma, D., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
  52. Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020)
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. 
Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. 
[1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. 
[2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. 
Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. 
arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. 
Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. 
[2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. 
Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. 
Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. 
[2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. 
International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. 
[2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. 
arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. 
Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. 
[2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. 
Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. 
[2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. 
Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. 
International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. 
[2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. 
Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. 
[2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. 
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. 
[2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. 
Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Ramesh, A., et al.: Zero-shot text-to-image generation. 
International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. 
IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. 
Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. 
International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. 
Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. 
[2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. 
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. 
[2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. 
[2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. 
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. 
Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. 
[2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. 
[2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. 
[2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. 
[2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. 
[2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. 
International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. 
[2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. 
International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. 
Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. 
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. 
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. 
Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. 
Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. 
Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. 
[2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. 
International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. 
[2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. 
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. 
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020)
  20. Chen, T., Guestrin, C.: XGBoost: A scalable tree boosting system. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 785–794 (2016) Schapire [2017] Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. 
[2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Schapire, R.E.: The strenght of weak learnability. Machine Learning 5, 197–227 (2017) Friedman [2001] Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001) Goodfellow et al. [2016] Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016) Schőlkopf and Smola [2002] Schőlkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002) Rumelhart et al. [1986] Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. 
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. 
International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. 
[2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. 
[2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. 
[2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. 
IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. 
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020)
[2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. 
Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. 
Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. 
Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. 
[2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. 
[2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. 
[2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. 
International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. 
[2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. 
[2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. 
Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. 
[2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. 
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. 
Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. 
[2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. 
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. 
[2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. 
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. 
International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020)
22. Friedman, J.: Greedy function approximation: a gradient boosting machine. Annals of Statistics 29(5), 1189–1232 (2001)
23. Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016)
24. Schölkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002)
25. Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986)
26. Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021)
27. Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022)
28. Krizhevsky, A., Sutskever, I., Hinton, G.: ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems 25 (2012)
29. Ramesh, A., et al.: Zero-shot text-to-image generation. International Conference on Machine Learning, 8821–8831 (2021)
30. Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017)
31. Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020)
32. Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv:2107.07045 (2021)
33. Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022)
34. Ribeiro, M.T., Singh, S., Guestrin, C.: "Why should I trust you?": Explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016)
35. Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017)
36. Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953)
37. Winter, E.: The Shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002)
38. Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022)
39. Van den Broeck, G., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022)
40. Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with Shapley values to explain image models. International Conference on Machine Learning (2022)
41. Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016)
42. Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020)
43. Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021)
44. Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023)
45. Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022)
46. Yang, Y., Xu, L., Li, L., Qie, N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022)
47. Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020)
48. Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020)
49. Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011)
50. Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010)
51. Kingma, D., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
52. Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020)
Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. 
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. 
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. 
Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. 
Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. 
Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. 
[2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. 
International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. 
[2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. 
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. 
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020)
  23. Goodfellow, I., Bengio, Y., Courville, A.: Deep learning. MIT Press (2016)
  24. Schölkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002)
  25. Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986)
  26. Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021)
  27. Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022)
  28. Krizhevsky, A., Sutskever, I., Hinton, G.: ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems 25 (2012)
  29. Ramesh, A., et al.: Zero-shot text-to-image generation. International Conference on Machine Learning, 8821–8831 (2021)
  30. Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017)
  31. Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020)
  32. Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv:2107.07045 (2021)
  33. Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022)
  34. Ribeiro, M.T., Singh, S., Guestrin, C.: "Why should I trust you?": Explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016)
  35. Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017)
  36. Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953)
  37. Winter, E.: The Shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002)
  38. Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022)
  39. Van den Broeck, G., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022)
  40. Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with Shapley values to explain image models. International Conference on Machine Learning (2022)
  41. Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016)
  42. Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020)
  43. Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021)
  44. Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023)
  45. Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022)
  46. Yang, Y., Xu, L., Li, L., Qie, N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022)
  47. Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020)
  48. Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020)
  49. Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011)
  50. Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010)
  51. Kingma, D., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
  52. Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020)
International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. 
[2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. 
International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. 
[2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. 
[2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. 
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. 
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. 
Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. 
[2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. 
Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. 
[2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. 
[2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. 
[2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. 
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. 
International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. 
[2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. 
International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. 
Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. 
International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Kingma, D., Ba, J.: A method for stochastic optimization. 
arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020)
24. Schölkopf, B., Smola, A.J.: Learning with kernels: support vector machines, regularization, optimization, and beyond. MIT Press (2002)
25. Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986)
26. Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021)
27. Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022)
28. Krizhevsky, A., Sutskever, I., Hinton, G.: ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems 25 (2012)
29. Ramesh, A., et al.: Zero-shot text-to-image generation. International Conference on Machine Learning, 8821–8831 (2021)
30. Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017)
31. Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020)
32. Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv:2107.07045 (2021)
33. Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022)
34. Ribeiro, M.T., Singh, S., Guestrin, C.: "Why should I trust you?": Explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016)
35. Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017)
36. Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953)
37. Winter, E.: The Shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002)
38. Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022)
39. Van den Broeck, G., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022)
40. Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with Shapley values to explain image models. International Conference on Machine Learning (2022)
41. Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016)
42. Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020)
43. Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021)
44. Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023)
45. Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022)
46. Yang, Y., Xu, L., Li, L., Qie, N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022)
47. Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020)
48. Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020)
49. Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011)
50. Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010)
51. Kingma, D., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
52. Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020)
International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. 
[2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. 
[2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. 
IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. 
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. 
[2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. 
Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. 
Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. 
Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020)
  25. Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature, 533–536 (1986) Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. 
[2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021) Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. 
Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. 
International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. 
IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. 
arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. 
Yuksel et al. [2021] Yuksel, E., Soydaner, D., Bahtiyar, H.: Nuclear binding energy predictions using neural networks: Application of the multilayer perceptron. International Journal of Modern Physics E 30(03), 2150017 (2021)
Ouyang et al. [2022] Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022)
IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022) Krizhevsky et al. [2012] Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. 
[2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Krizhevsky, A., Sutskever, I., Hinton, G.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012) Ramesh et al. [2021] Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. 
arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. 
Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Ramesh, A., et al.: Zero-shot text-to-image generation. International conference on machine learning, 8821–8831 (2021) Biran and Cotton [2017] Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. 
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017) Linardatos et al. [2020] Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. 
Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020) Gohel et al. [2021] Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. 
Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. 
Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. 
[2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. 
Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. 
International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Shapley, L.S.: A value for n-person games. 
Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. 
[2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. 
[2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. 
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. 
  27. Ouyang, L., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022)
  28. Krizhevsky, A., Sutskever, I., Hinton, G.: ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems 25 (2012)
  29. Ramesh, A., et al.: Zero-shot text-to-image generation. International Conference on Machine Learning, 8821–8831 (2021)
  30. Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017)
  31. Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020)
  32. Gohel, P., Singh, P., Mohanty, M.: Explainable AI: Current status and future directions. arXiv preprint arXiv:2107.07045 (2021)
  33. Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022)
  34. Ribeiro, M.T., Singh, S., Guestrin, C.: "Why should I trust you?": Explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016)
  35. Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017)
  36. Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953)
  37. Winter, E.: The Shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002)
  38. Molnar, C.: Interpretable Machine Learning: A Guide for Making Black Box Models Explainable. Independently published (2022)
  39. Van den Broeck, G., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022)
  40. Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with Shapley values to explain image models. International Conference on Machine Learning (2022)
  41. Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetics ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016)
  42. Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020)
  43. Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021)
  44. Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023)
  45. Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022)
  46. Yang, Y., Xu, L., Li, L., Qie, N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022)
  47. Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020)
  48. Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020)
  49. Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011)
  50. Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010)
  51. Kingma, D., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
  52. Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020)
International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. 
Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. 
International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Kingma, D., Ba, J.: A method for stochastic optimization. 
arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020)
  28. Krizhevsky, A., Sutskever, I., Hinton, G.: ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems 25 (2012)
  29. Ramesh, A., et al.: Zero-shot text-to-image generation. International Conference on Machine Learning, 8821–8831 (2021)
  30. Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017)
  31. Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020)
  32. Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv:2107.07045 (2021)
  33. Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022)
  34. Ribeiro, M.T., Singh, S., Guestrin, C.: Why should I trust you?: Explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016)
  35. Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017)
  36. Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953)
  37. Winter, E.: The Shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002)
  38. Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022)
  39. Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022)
  40. Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with Shapley values to explain image models. International Conference on Machine Learning (2022)
  41. Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016)
  42. Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020)
  43. Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021)
  44. Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023)
  45. Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022)
  46. Yang, Y., Xu, L., Li, L., Qie, N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022)
  47. Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020)
  48. Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020)
  49. Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011)
  50. Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010)
  51. Kingma, D., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
  52. Soydaner, D.: A comparison of optimization algorithms for deep learning.
International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. 
[2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. 
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. 
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020)
  29. Ramesh, A., et al.: Zero-shot text-to-image generation. International Conference on Machine Learning, 8821–8831 (2021)
  30. Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017)
  31. Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020)
  32. Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv:2107.07045 (2021)
  33. Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022)
  34. Ribeiro, M.T., Singh, S., Guestrin, C.: Why should I trust you?: Explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016)
  35. Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017)
  36. Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953)
  37. Winter, E.: The Shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002)
  38. Molnar, C.: Interpretable Machine Learning: A Guide for Making Black Box Models Explainable. Independently published (2022)
  39. Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022)
  40. Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with Shapley values to explain image models. International Conference on Machine Learning (2022)
  41. Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016)
  42. Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020)
  43. Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021)
  44. Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023)
  45. Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022)
  46. Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022)
  47. Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020)
  48. Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment.
IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. 
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. 
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. 
International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020)
30. Biran, O., Cotton, C.: Explanation and justification in machine learning: A survey. IJCAI-17 Workshop on Explainable AI (XAI) 8(1), 8–13 (2017)
31. Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020)
32. Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv:2107.07045 (2021)
33. Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022)
34. Ribeiro, M.T., Singh, S., Guestrin, C.: "Why should I trust you?": Explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016)
35. Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017)
36. Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953)
37. Winter, E.: The Shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002)
38. Molnar, C.: Interpretable Machine Learning: A Guide for Making Black Box Models Explainable. Independently published (2022)
39. Van den Broeck, G., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022)
40. Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with Shapley values to explain image models. International Conference on Machine Learning (2022)
41. Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016)
42. Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020)
43. Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021)
44. Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023)
45. Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022)
46. Yang, Y., Xu, L., Li, L., Qiao, N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022)
47. Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020)
48. Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020)
49. Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011)
50. Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010)
51. Kingma, D., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
52. Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020)
Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. 
International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Kingma, D., Ba, J.: A method for stochastic optimization. 
arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020)
  31. Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: A review of machine learning interpretability methods. Entropy 23 (2020)
  32. Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv: 2107.07045 (2021) Holzinger et al. [2022] Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. 
Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. 
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. 
[2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. 
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. 
Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. 
Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. 
Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. 
[2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. 
International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. 
[2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. 
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. 
Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020)
  33. Holzinger, A., Saranti, A., Molnar, C., Biecek, P., Samek, W.: Explainable AI methods - a brief overview. Lecture Notes in Computer Science 13200 (2022) Ribeiro et al. [2016] Ribeiro, M.T., Singh, S., Guestrin, C.: Why should i trust you?: explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016) Shrikumar et al. [2017] Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017) Shapley [1953] Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953) Winter [2002] Winter, E.: The shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002) Molnar [2022] Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022) den Broeck et al. [2022] Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022) Lahiri et al. [2022] Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022) Kong et al. [2016] Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016) Kang et al. [2020] Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020) Shaham et al. [2021] Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021) Li et al. [2023] Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023) Duan et al. [2022] Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022) Yang et al. [2022] Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. 
Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010)
  34. Ribeiro, M.T., Singh, S., Guestrin, C.: "Why should I trust you?": Explaining the predictions of any classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144 (2016)
  35. Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017)
  36. Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953)
  37. Winter, E.: The Shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002)
  38. Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022)
  39. Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022)
  40. Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with Shapley values to explain image models. International Conference on Machine Learning (2022)
  41. Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016)
  42. Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020)
  43. Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021)
  44. Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023)
  45. Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022)
  46. Yang, Y., Xu, L., Li, L., Qie, N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022)
  47. Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020)
  48. Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020)
  49. Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011)
  50. Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010)
  51. Kingma, D., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
  52. Soydaner, D.: A comparison of optimization algorithms for deep learning. 
International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020)
  35. Shrikumar, A., Greenside, P., Kundaje, A.: Learning important features through propagating activation differences. International Conference on Machine Learning 70, 3145–3153 (2017)
  36. Shapley, L.S.: A value for n-person games. Contributions to the Theory of Games, 307–317 (1953)
Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Yang, Y., Xu, L., Li, L., Q., N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022) Fang et al. [2020] Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020) Hosu et al. [2020] Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020) Glorot et al. [2011] Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. 
Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011) Glorot and Bengio [2010] Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010) Kingma and Ba [2014] Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Kingma, D., Ba, J.: A method for stochastic optimization. arXiv preprint arXiv: 1412.6980 (2014) Soydaner [2020] Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020) Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020)
  37. Winter, E.: The Shapley value. Handbook of Game Theory with Economic Applications 3, 2025–2054 (2002)
  38. Molnar, C.: Interpretable machine learning: A guide for making black box models explainable. Independently published (2022)
  39. Broeck, G.V., Lykov, A., Schleich, M., Suciu, D.: On the tractability of SHAP explanations. Journal of Artificial Intelligence Research 74, 851–886 (2022)
  40. Lahiri, A., Alipour, K., Adeli, E., Salimi, B.: Combining counterfactuals with shapley values to explain image models. International Conference on Machine Learning (2022)
  41. Kong, S., Shen, X., Lin, Z., Mech, R., Fowlkes, C.: Photo aesthetic ranking network with attributes and content adaptation. European Conference on Computer Vision, 662–679 (2016)
  42. Kang, C., Valenzise, G., Dufaux, F.: EVA: An explainable visual aesthetics dataset. Joint Workshop on Aesthetic and Technical Quality Assessment of Multimedia and Media Analytics for Societal Trends, 5–13 (2020)
  43. Shaham, U., Zaidman, I., Svirsky, J.: Deep ordinal regression using optimal transport loss and unimodal output probabilities. arXiv preprint arXiv:2011.07607 (2021)
  44. Li, L., Zhi, T., Shi, G., Yang, Y., Xu, L., Li, Y., Guo, Y.: Anchor-based knowledge embedding for image aesthetics assessment. Neurocomputing (2023)
  45. Duan, J., Chen, P., Li, L., Wu, J., Shi, G.: Semantic attribute guided image aesthetics assessment. IEEE International Conference on Visual Communications and Image Processing (VCIP), 1–5 (2022)
  46. Yang, Y., Xu, L., Li, L., Qie, N., Li, Y., Zhang, P., Guo, Y.: Personalized image aesthetics assessment with rich attributes. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 19861–19869 (2022)
  47. Fang, Y., Zhu, H., Zeng, Y., Ma, K., Wang, Z.: Perceptual quality assessment of smartphone photography. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 3677–3686 (2020)
  48. Hosu, V., Lin, H., Sziranyi, T., Saupe, D.: KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment. IEEE Transactions on Image Processing 29, 4041–4056 (2020)
  49. Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, 315–323 (2011)
  50. Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256 (2010)
  51. Kingma, D., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
  52. Soydaner, D.: A comparison of optimization algorithms for deep learning. International Journal of Pattern Recognition and Artificial Intelligence 34(13), 2052013 (2020)