- The paper categorizes risks from 39 articles into clinical data, technical, and socio-ethical domains, presenting a framework for understanding AI risks in healthcare.
- It identifies specific challenges such as patient harm, tool misuse, algorithmic bias, privacy breaches, and accountability gaps while detailing targeted remedies.
- This comprehensive review offers actionable strategies that guide future research towards safe, effective, and equitable AI integration in healthcare.
This literature review analyzes 39 articles published between 2018 and 2023 to identify and categorize the risks associated with implementing AI in healthcare (2309.14530). It proposes a framework organizing these risks into three primary domains: clinical data risks, technical risks, and socio-ethical risks, which are further broken down into seven major categories. The goal is to provide a comprehensive resource for understanding these challenges and developing mitigation strategies.
The framework categorizes AI risks as follows:
- Clinical Data Risks:
- Patient Harm due to AI Errors: AI systems can fail due to noisy input data, dataset shift (where real-world data differs from training data due to population, protocol, or equipment variations), and an inability to adapt to unexpected environmental or contextual changes (e.g., mistaking artifacts for observations). Remedies include standardized evaluation and regulation, designing AI as assistive tools with clinician oversight, and developing dynamic systems capable of continuous learning while retaining human control (a minimal dataset-shift check is sketched after this list).
- Technical Risks:
- Misuse of Medical AI Tools: Incorrect use by clinicians or patients can lead to errors. This risk arises from tools designed without sufficient end-user input, which produces overly complex interactions; a lack of AI literacy among users; and the proliferation of easily accessible but potentially unreliable or unvalidated AI health apps. Remedies involve collaborating closely with end-users during design, implementing broad AI education programs, and regulating consumer-facing AI health tools.
- Risk of Bias in Medical AI and Perpetuation of Inequities: AI can embed and amplify existing healthcare disparities related to factors like gender, age, ethnicity, income, and geography. This stems from biased training data reflecting systemic inequities or human biases, geographic concentration in datasets, and biased data labeling during clinical assessment. Remedies include careful data selection representing diverse populations, interdisciplinary development teams, ensuring AI transparency and explainability, and continuous monitoring for bias post-deployment (a simple subgroup-monitoring sketch follows this list).
- Privacy and Security Issues: AI deployment increases risks to data privacy and confidentiality. Challenges include ensuring truly informed consent when dealing with opaque algorithms and complex data-sharing agreements, preventing unauthorized data repurposing ('function creep'), and protecting systems and data from security breaches or cyberattacks. Remedies involve increasing awareness of risks, expanding regulatory frameworks for accountability, promoting decentralized approaches like federated learning (sketched after this list), and research into enhanced security measures.
- Obstacles to Implementation in Real-World Healthcare: Even validated AI tools face adoption barriers. These include the traditionally slow uptake of technology in healthcare; poor-quality, unstructured, or inconsistent real-world health data that require significant cleaning; and the unknown effects of AI on clinician-patient dynamics, which necessitate updated clinical guidelines and care models.
- Socio-ethical Risks:
- Lack of Transparency: Many AI algorithms, especially deep learning models, are 'black boxes,' making it hard to understand how they reach a decision. This lack of transparency comprises two aspects: traceability (documenting the development process, data used, and real-world performance) and explainability (understanding the reasoning behind specific predictions). This opacity hinders trust, adoption, and the ability to identify error sources. Remedies include creating an 'AI passport' detailing model information (one possible record structure is sketched after this list), developing traceability tools for post-deployment monitoring, involving end-users in selecting explainability methods, and making transparency a regulatory requirement.
- Gaps in AI Accountability: Determining responsibility when AI-related errors cause harm is difficult due to the multiple actors involved (developers, clinicians, institutions, patients), the challenge of pinpointing the error's source (algorithm, data, usage), the lack of legal precedent, and differing governance/ethical standards between medical professionals and AI developers. This ambiguity can hinder adoption if clinicians fear liability or patients lack trust. Remedies involve establishing clear processes for identifying roles in case of harm and creating dedicated regulatory bodies to enforce accountability frameworks for all stakeholders, including AI manufacturers.
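To make the dataset-shift concern from the clinical data risks concrete, the sketch below (not from the reviewed paper) shows how a deployment site might flag features whose incoming data no longer match the training distribution. The function name `flag_dataset_shift`, the per-feature Kolmogorov-Smirnov test, and the significance threshold are all illustrative assumptions.

```python
"""Minimal sketch: flagging dataset shift before an AI tool is applied to new data.

Assumes simple tabular numeric features; names and thresholds are illustrative.
"""
import numpy as np
from scipy.stats import ks_2samp


def flag_dataset_shift(train, deployed, feature_names, alpha=0.01):
    """Return the features whose deployment distribution differs from training.

    A two-sample Kolmogorov-Smirnov test is run per feature; a small p-value
    suggests the incoming data no longer match what the model was trained on
    (e.g., a new scanner, protocol, or patient population).
    """
    shifted = []
    for j, name in enumerate(feature_names):
        result = ks_2samp(train[:, j], deployed[:, j])
        if result.pvalue < alpha:
            shifted.append(name)
    return shifted


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train = rng.normal(loc=0.0, scale=1.0, size=(500, 2))
    deployed = np.column_stack([
        rng.normal(loc=0.0, scale=1.0, size=500),   # unchanged feature
        rng.normal(loc=1.5, scale=1.0, size=500),   # shifted feature
    ])
    print(flag_dataset_shift(train, deployed, ["heart_rate", "lab_value"]))
```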
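Continuous post-deployment bias monitoring can start with something as simple as tracking performance per patient subgroup. The sketch below assumes a binary classifier and a single grouping attribute and reports each group's positive-prediction rate and sensitivity; the metric choices are illustrative, not prescribed by the paper.

```python
"""Minimal sketch: post-deployment bias monitoring across patient subgroups."""
from collections import defaultdict


def subgroup_rates(y_true, y_pred, groups):
    """Return per-group positive-prediction rate and sensitivity (TPR).

    Large gaps between groups (e.g., by ethnicity or sex) can indicate that
    the model performs unevenly and warrants investigation.
    """
    stats = defaultdict(lambda: {"n": 0, "pred_pos": 0, "true_pos": 0, "actual_pos": 0})
    for t, p, g in zip(y_true, y_pred, groups):
        s = stats[g]
        s["n"] += 1
        s["pred_pos"] += int(p == 1)
        s["actual_pos"] += int(t == 1)
        s["true_pos"] += int(t == 1 and p == 1)
    report = {}
    for g, s in stats.items():
        report[g] = {
            "positive_rate": s["pred_pos"] / s["n"],
            "sensitivity": s["true_pos"] / s["actual_pos"] if s["actual_pos"] else float("nan"),
        }
    return report


if __name__ == "__main__":
    y_true = [1, 0, 1, 1, 0, 1, 0, 1]
    y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
    for group, metrics in subgroup_rates(y_true, y_pred, groups).items():
        print(group, metrics)
```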
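The decentralized remedy mentioned under privacy, federated learning, keeps patient records at each institution and shares only model parameters. Below is a minimal federated-averaging (FedAvg) sketch on synthetic data; the local logistic-regression update and the number of communication rounds are simplifying assumptions, not the paper's prescription.

```python
"""Minimal sketch of federated averaging (FedAvg): hospitals train locally and
share only model weights, never patient records. Data here is synthetic."""
import numpy as np


def local_update(weights, X, y, lr=0.1, epochs=20):
    """One site's contribution: a few gradient steps of logistic regression
    on its own data. Only the updated weights leave the site."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))
        grad = X.T @ (preds - y) / len(y)
        w -= lr * grad
    return w


def federated_average(site_weights, site_sizes):
    """Server step: average site weights, weighted by local sample count."""
    total = sum(site_sizes)
    return sum(w * (n / total) for w, n in zip(site_weights, site_sizes))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two synthetic "hospitals" with differently distributed patients.
    sites = []
    for shift in (0.0, 1.0):
        X = rng.normal(shift, 1.0, size=(200, 3))
        y = (X[:, 0] + 0.5 * X[:, 1] > shift).astype(float)
        sites.append((X, y))

    global_w = np.zeros(3)
    for _ in range(5):  # communication rounds
        updates = [local_update(global_w, X, y) for X, y in sites]
        global_w = federated_average(updates, [len(y) for _, y in sites])
    print("global weights after 5 rounds:", global_w)
```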
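Finally, the 'AI passport' idea amounts to a structured, auditable record of a model's provenance and post-deployment behaviour. The dataclass below is one hypothetical way to hold such a record; the field names and example values are assumptions, not a schema from the paper or any regulator.

```python
"""Minimal sketch of an 'AI passport' record for traceability. Field names are
illustrative; a real passport would follow a mandated schema."""
from dataclasses import dataclass, field, asdict
import json


@dataclass
class AIPassport:
    model_name: str
    version: str
    intended_use: str
    training_data_description: str   # provenance and demographics of training data
    validation_performance: dict     # e.g., metrics per external validation site
    known_limitations: list = field(default_factory=list)
    post_deployment_log: list = field(default_factory=list)  # monitoring events

    def log_event(self, event: str) -> None:
        """Append a post-deployment observation (e.g., a detected performance drop)."""
        self.post_deployment_log.append(event)


if __name__ == "__main__":
    passport = AIPassport(
        model_name="sepsis-risk-screen",
        version="1.2.0",
        intended_use="Assistive triage flag; final decision rests with the clinician.",
        training_data_description="EHR data from three academic hospitals, 2015-2020.",
        validation_performance={"site_A_auroc": 0.81, "site_B_auroc": 0.77},
        known_limitations=["Not validated on paediatric patients."],
    )
    passport.log_event("2024-03: sensitivity drop observed after lab-assay change.")
    print(json.dumps(asdict(passport), indent=2))
```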
The paper concludes that understanding these categorized risks is crucial for the safe, effective, and equitable integration of AI into healthcare. The framework serves as a guide for future research and the development of strategies to mitigate potential harms.