Concrete Safety for ML Problems: System Safety for ML Development and Assessment (2302.02972v1)

Published 6 Feb 2023 in cs.LG, cs.CY, cs.SE, cs.SY, and eess.SY

Abstract: Many stakeholders struggle to rely on ML-driven systems due to the risk of harm these systems may cause. Concerns about trustworthiness, unintended social harms, and unacceptable social and ethical violations undermine the promise of ML advancements. Moreover, such risks in complex ML-driven systems present a special challenge as they are often difficult to foresee, arising over periods of time, across populations, and at scale. These risks often arise not directly from poor ML development decisions or low performance, but rather emerge through the interactions among ML development choices, the context of model use, environmental factors, and the effects of a model on its target. Systems safety engineering is an established discipline with a proven track record of identifying and managing risks even in high-complexity sociotechnical systems. In this work, we apply a state-of-the-art systems safety approach to concrete applications of ML with notable social and ethical risks to demonstrate a systematic means for meeting the assurance requirements needed to argue for safe and trustworthy ML in sociotechnical systems.

Authors (5)
  1. Edgar W. Jatho (1 paper)
  2. Logan O. Mailloux (4 papers)
  3. Eugene D. Williams (2 papers)
  4. Patrick McClure (11 papers)
  5. Joshua A. Kroll (4 papers)
Citations (1)