
From plane crashes to algorithmic harm: applicability of safety engineering frameworks for responsible ML (2210.03535v1)

Published 6 Oct 2022 in cs.HC and cs.LG

Abstract: Inappropriate design and deployment of ML systems leads to negative downstream social and ethical impact -- described here as social and ethical risks -- for users, society and the environment. Despite the growing need to regulate ML systems, current processes for assessing and mitigating risks are disjointed and inconsistent. We interviewed 30 industry practitioners on their current social and ethical risk management practices, and collected their first reactions on adapting safety engineering frameworks into their practice -- namely, System Theoretic Process Analysis (STPA) and Failure Mode and Effects Analysis (FMEA). Our findings suggest STPA/FMEA can provide appropriate structure toward social and ethical risk assessment and mitigation processes. However, we also find nontrivial challenges in integrating such frameworks in the fast-paced culture of the ML industry. We call on the ML research community to strengthen existing frameworks and assess their efficacy, ensuring that ML systems are safer for all people.

Authors (7)
  1. Shalaleh Rismani (8 papers)
  2. Renee Shelby (12 papers)
  3. Andrew Smart (20 papers)
  4. Edgar Jatho (1 paper)
  5. Joshua Kroll (1 paper)
  6. AJung Moon (8 papers)
  7. Negar Rostamzadeh (38 papers)
Citations (25)
