Can Fairness be Automated? Guidelines and Opportunities for Fairness-aware AutoML (2303.08485v2)
Abstract: The field of automated machine learning (AutoML) introduces techniques that automate parts of the development of ML systems, accelerating the process and reducing barriers for novices. However, decisions derived from ML models can reproduce, amplify, or even introduce unfairness in our societies, causing harm to (groups of) individuals. In response, researchers have started to propose AutoML systems that jointly optimize fairness and predictive performance to mitigate fairness-related harm. However, fairness is a complex and inherently interdisciplinary subject, and solely posing it as an optimization problem can have adverse side effects. With this work, we aim to raise awareness among developers of AutoML systems about such limitations of fairness-aware AutoML, while also calling attention to the potential of AutoML as a tool for fairness research. We present a comprehensive overview of different ways in which fairness-related harm can arise and the ensuing implications for the design of fairness-aware AutoML. We conclude that while fairness cannot be automated, fairness-aware AutoML can play an important role in the toolbox of ML practitioners. We highlight several open technical challenges for future work in this direction. Additionally, we advocate for the creation of more user-centered assistive systems designed to tackle challenges encountered in fairness work.
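To make the joint-optimization setting the abstract refers to concrete, the following is a minimal, self-contained sketch (not the paper's method) of treating fairness and predictive performance as two objectives in a hyperparameter search. It uses synthetic data, a single decision threshold as the "hyperparameter", and demographic parity difference as an illustrative fairness metric; all names and numbers are assumptions for illustration.

```python
import random

# Toy data: each record is (score, group, label). Scores for group "B" are
# shifted down, so any single global threshold trades off accuracy against
# demographic parity between the two groups.
random.seed(0)
data = []
for _ in range(500):
    group = random.choice("AB")
    label = random.random() < 0.5
    base = 0.65 if label else 0.35
    shift = -0.10 if group == "B" else 0.0
    data.append((base + shift + random.gauss(0, 0.15), group, label))

def evaluate(threshold):
    """Return (accuracy, demographic parity difference) of a threshold rule."""
    correct = 0
    pos = {"A": 0, "B": 0}  # positive predictions per group
    n = {"A": 0, "B": 0}    # group sizes
    for score, group, label in data:
        pred = score >= threshold
        correct += pred == label
        pos[group] += pred
        n[group] += 1
    acc = correct / len(data)
    dpd = abs(pos["A"] / n["A"] - pos["B"] / n["B"])
    return acc, dpd

# Random search over the configuration space (here just the threshold),
# keeping the Pareto front over (accuracy, unfairness) instead of collapsing
# the two objectives into one scalar.
candidates = [evaluate(random.uniform(0.2, 0.8)) for _ in range(50)]
pareto = [c for c in candidates
          if not any(o[0] >= c[0] and o[1] <= c[1] and o != c
                     for o in candidates)]
```

Returning a Pareto front rather than a single "best" configuration reflects one of the paper's points: the trade-off between fairness metrics and predictive performance is a value-laden choice that the system should surface to the practitioner, not resolve silently.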
Authors: Hilde Weerts, Florian Pfisterer, Matthias Feurer, Katharina Eggensperger, Edward Bergman, Noor Awad, Joaquin Vanschoren, Mykola Pechenizkiy, Bernd Bischl, Frank Hutter