Responsible and Regulatory Conform Machine Learning for Medicine: A Survey of Challenges and Solutions (2107.09546v2)

Published 20 Jul 2021 in cs.LG and cs.CY

Abstract: Machine learning is expected to fuel significant improvements in medical care. To ensure that fundamental principles such as beneficence, respect for human autonomy, prevention of harm, justice, privacy, and transparency are respected, medical machine learning systems must be developed responsibly. Many high-level declarations of ethical principles have been put forth for this purpose, but there is a severe lack of technical guidelines explicating the practical consequences for medical machine learning. Similarly, there is currently considerable uncertainty regarding the exact regulatory requirements placed upon medical machine learning systems. This survey provides an overview of the technical and procedural challenges involved in creating medical machine learning systems responsibly and in conformity with existing regulations, as well as possible solutions to address these challenges. First, a brief review of existing regulations affecting medical machine learning is provided, showing that properties such as safety, robustness, reliability, privacy, security, transparency, explainability, and nondiscrimination are all demanded already by existing law and regulations - albeit, in many cases, to an uncertain degree. Next, the key technical obstacles to achieving these desirable properties are discussed, as well as important techniques to overcome these obstacles in the medical context. We notice that distribution shift, spurious correlations, model underspecification, uncertainty quantification, and data scarcity represent severe challenges in the medical context. Promising solution approaches include the use of large and representative datasets and federated learning as a means to that end, the careful exploitation of domain knowledge, the use of inherently transparent models, comprehensive out-of-distribution model testing and verification, as well as algorithmic impact assessments.
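
The abstract points to federated learning as one route to training on large and representative medical datasets without centralizing patient records. As a rough, hedged illustration of that idea (not a method taken from the paper), the sketch below shows one communication round of federated averaging in PyTorch; the model, client data loaders, and hyperparameters are assumed to be supplied by the caller and are purely illustrative.

```python
# Minimal federated averaging (FedAvg) sketch: each site (e.g., a hospital)
# trains on its own data and shares only model weights, never raw records.
import copy
import torch
import torch.nn as nn

def local_update(global_model, data_loader, epochs=1, lr=1e-3):
    """Train a copy of the current global model on one site's local data."""
    model = copy.deepcopy(global_model)
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in data_loader:
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()
            optimizer.step()
    return model.state_dict()

def federated_round(global_model, client_loaders):
    """One round: collect local updates and average the weights elementwise."""
    client_states = [local_update(global_model, dl) for dl in client_loaders]
    avg_state = copy.deepcopy(client_states[0])
    for key in avg_state:
        avg_state[key] = torch.stack(
            [state[key].float() for state in client_states]
        ).mean(dim=0)
    global_model.load_state_dict(avg_state)
    return global_model
```

In practice, plain unweighted averaging like this would still need weighting by local dataset size and, for the privacy and security requirements the survey discusses, mechanisms such as secure aggregation or differential privacy; the sketch only conveys the basic structure in which raw data never leaves the participating site.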

Authors (11)
  1. Eike Petersen
  2. Yannik Potdevin
  3. Esfandiar Mohammadi
  4. Stephan Zidowitz
  5. Sabrina Breyer
  6. Dirk Nowotka
  7. Sandra Henn
  8. Ludwig Pechmann
  9. Martin Leucker
  10. Philipp Rostalski
  11. Christian Herzog
Citations (16)
