
Trustworthy AI: A Computational Perspective

Published 12 Jul 2021 in cs.AI (arXiv:2107.06641v3)

Abstract: In the past few decades, AI technology has experienced swift developments, changing everyone's daily life and profoundly altering the course of human society. The intention of developing AI is to benefit humans, by reducing human labor, bringing everyday convenience to human lives, and promoting social good. However, recent research and AI applications show that AI can cause unintentional harm to humans, such as making unreliable decisions in safety-critical scenarios or undermining fairness by inadvertently discriminating against one group. Thus, trustworthy AI has attracted immense attention recently, which requires careful consideration to avoid the adverse effects that AI may bring to humans, so that humans can fully trust and live in harmony with AI technologies. Recent years have witnessed a tremendous amount of research on trustworthy AI. In this survey, we present a comprehensive survey of trustworthy AI from a computational perspective, to help readers understand the latest technologies for achieving trustworthy AI. Trustworthy AI is a large and complex area, involving various dimensions. In this work, we focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being. For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems. We also discuss the accordant and conflicting interactions among different dimensions and discuss potential aspects for trustworthy AI to investigate in the future.

Citations (165)

Summary

  • The paper surveys advancements in trustworthy AI from a computational perspective, analyzing six critical dimensions: safety, fairness, explainability, privacy, accountability, and environmental sustainability.
  • It details computational strategies for achieving trustworthiness in each dimension, such as adversarial training for safety, adversarial learning for fairness, and differential privacy for privacy.
  • The authors discuss interactions between these trustworthiness dimensions and suggest future research pathways, including human agency and oversight and ensuring credible AI outputs.


The paper "Trustworthy AI: A Computational Perspective" provides a comprehensive survey of recent advances aimed at building trustworthy AI systems, examining the field's complexity from a computational standpoint. As AI technology continues to weave itself into the fabric of human life, ensuring its reliable and ethical deployment remains a central challenge. The authors structure the discussion around six critical dimensions that benchmark AI trustworthiness: safety, fairness, explainability, privacy, accountability, and environmental sustainability. Each dimension is explored in depth to highlight the diverse avenues through which trustworthiness can be attained and sustained.

Key Dimensions:

1. Safety and Robustness: The paper highlights the vulnerability of AI systems to adversarial attacks, especially in safety-critical applications like autonomous vehicles and healthcare. It presents various strategies to bolster resilience, such as adversarial training and certified defenses, which ensure models can withstand perturbations while maintaining functional integrity.
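Adversarial training, mentioned above, replaces clean training examples with worst-case perturbed ones at each step. The following is a minimal illustrative sketch (not from the paper) using the fast gradient sign method (FGSM) on a toy logistic-regression model; all data and hyperparameters are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data: two Gaussian blobs.
X = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
y = np.hstack([np.zeros(100), np.ones(100)])

w, b = np.zeros(2), 0.0
lr, eps = 0.1, 0.2  # learning rate, FGSM perturbation budget

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(200):
    # FGSM: nudge each input in the sign of the loss gradient w.r.t. x.
    p = sigmoid(X @ w + b)
    grad_x = np.outer(p - y, w)          # d(logistic loss)/d(x)
    X_adv = X + eps * np.sign(grad_x)
    # Train on the adversarial examples instead of the clean ones.
    p_adv = sigmoid(X_adv @ w + b)
    w -= lr * X_adv.T @ (p_adv - y) / len(y)
    b -= lr * np.mean(p_adv - y)

clean_acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(f"clean accuracy after adversarial training: {clean_acc:.2f}")
```

In practice the same loop is applied to deep networks with stronger inner attacks (e.g. multi-step PGD), but the structure, perturbing inputs toward higher loss and then training on the perturbed batch, is the same.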

2. Fairness and Nondiscrimination: AI systems often mirror societal biases through training data or algorithmic design, leading to unfair treatment across demographic lines. The paper categorizes biases as deriving from the data, the algorithm, or the evaluation methodology, and reviews mitigation strategies such as adversarial learning and regularization, emphasizing the importance of balancing performance and ethical fairness.
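Before any mitigation can be applied, unfairness has to be measured. A simple sketch (not from the paper; all data is synthetic and the group/label setup is hypothetical) of two common group-fairness metrics:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical audit data: binary predictions and a binary protected group.
group = rng.integers(0, 2, 1000)      # protected attribute (0 or 1)
y_true = rng.integers(0, 2, 1000)     # ground-truth labels
# A deliberately biased classifier: predicts 1 more often for group 1.
y_pred = (rng.random(1000) < np.where(group == 1, 0.7, 0.4)).astype(int)

def demographic_parity_gap(pred, grp):
    """|P(pred=1 | group=0) - P(pred=1 | group=1)|"""
    return abs(pred[grp == 0].mean() - pred[grp == 1].mean())

def equal_opportunity_gap(pred, truth, grp):
    """Gap in true-positive rates between the two groups."""
    tpr = [pred[(grp == g) & (truth == 1)].mean() for g in (0, 1)]
    return abs(tpr[0] - tpr[1])

dp_gap = demographic_parity_gap(y_pred, group)
print(f"demographic parity gap: {dp_gap:.2f}")
print(f"equal opportunity gap:  {equal_opportunity_gap(y_pred, group, y_true):.2f}")
```

Mitigation techniques such as the adversarial learning and regularization methods the survey reviews aim to drive gaps like these toward zero while limiting the loss in predictive accuracy.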

3. Explainability: The drive for transparency in AI processing is a compelling need articulated within the context of interpretability and its applications in critical domains such as medicine. The authors analyze methods like model-intrinsic and model-agnostic explanations, underscoring the utility of techniques like LIME and Grad-CAM in demystifying AI operations without compromising performance.
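Model-agnostic methods like LIME explain an individual prediction by fitting a simple surrogate model to the black box's behavior in a neighborhood of the input. A minimal LIME-style sketch (not the actual LIME implementation; the black-box function, kernel width, and sample counts are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical black-box model: a nonlinear function of two features.
def black_box(X):
    return 1.0 / (1.0 + np.exp(-(3 * X[:, 0] - 0.5 * X[:, 1] ** 2)))

x0 = np.array([0.5, 1.0])  # the instance to explain

# 1. Sample perturbations around x0.
Z = x0 + rng.normal(0, 0.3, (500, 2))
# 2. Weight samples by proximity to x0 (exponential kernel).
weights = np.exp(-np.sum((Z - x0) ** 2, axis=1) / 0.25)
# 3. Fit a weighted linear surrogate to the black-box outputs.
sw = np.sqrt(weights)
A = np.hstack([Z, np.ones((500, 1))])  # add intercept column
coef, *_ = np.linalg.lstsq(A * sw[:, None], black_box(Z) * sw, rcond=None)

print(f"local feature attributions: {coef[:2].round(2)}")
```

The surrogate's coefficients serve as local feature attributions: here the first feature should receive a positive weight and the second a negative one, matching the local slopes of the black-box function around x0.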

4. Privacy: Addressing privacy concerns in AI is crucial, given that data breaches can severely undermine personal and organizational security. The paper surveys various attack vectors such as model inversion and membership inference, proposing countermeasures like differential privacy, federated learning, and confidential computing to safeguard sensitive information.
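Differential privacy, one of the countermeasures surveyed, bounds how much any single individual's data can shift a released statistic. A minimal sketch of the classic Laplace mechanism for a counting query (the count value is made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

def laplace_count(true_count, epsilon):
    """Release a count with epsilon-differential privacy.

    A counting query has sensitivity 1: adding or removing one person
    changes the result by at most 1, so Laplace noise with scale
    1/epsilon suffices for epsilon-DP.
    """
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

true_count = 1234
noisy = laplace_count(true_count, epsilon=0.5)
print(f"true count: {true_count}, private release: {noisy:.1f}")
```

Smaller epsilon means more noise and stronger privacy; the same trade-off governs differentially private model training, where noise is added to gradients rather than to a final statistic.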

5. Accountability and Auditability: As AI systems operate as black-box models with complex operational dynamics, determining accountability becomes challenging. The paper explores methodologies like algorithmic audits to establish a framework for accountability, emphasizing the need for internal and external evaluations to align AI outputs with ethical usage.

6. Environmental Sustainability: The paper draws attention to the environmental toll exacted by AI, especially through extensive energy consumption in model training. Techniques like model compression and adaptive design are discussed as viable methods to enhance energy efficiency, thereby enabling AI development with reduced ecological impact.
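Model compression shrinks a trained network so that inference (and retraining) costs less energy. One of the simplest variants is magnitude pruning, sketched below on a hypothetical dense weight matrix (the layer size and sparsity target are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical dense weight matrix from a trained layer.
W = rng.normal(0, 1, (64, 64))

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude weights, keeping the largest ones."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) >= threshold, weights, 0.0)

W_pruned = magnitude_prune(W, sparsity=0.9)
kept = np.count_nonzero(W_pruned) / W.size
print(f"fraction of weights kept: {kept:.2f}")  # roughly 0.10
```

In practice pruning is interleaved with fine-tuning to recover accuracy, and the resulting sparse matrices reduce both memory footprint and compute, which is the energy-efficiency lever this dimension targets.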

Interactions and Future Directions:

Understanding the interactions among these dimensions is vital for holistic AI development. The authors highlight both synergistic and antagonistic interactions, such as how robustness may complement explainability but conflict with privacy. They also identify dimensions beyond the six surveyed, including human agency and oversight and the credibility of AI outputs, as future research pathways that require attention to achieve comprehensive AI trustworthiness.

The paper serves as a critical guide for researchers exploring the multifaceted landscape of AI ethics and reliability, offering insightful discussions on cutting-edge methods while charting paths for potential advancements. In its thorough exposition of trustworthy AI, the paper underscores a universal message that aligning AI technology with ethical principles is not merely desirable but indispensable in fortifying public trust and ensuring AI's beneficial influence on society.
