While AI offers tremendous benefits, it also poses risks. Some of these risks are already materializing as harms, events broadly referred to as "AI incidents". As AI is deployed across more and more sectors, an uptick in AI incidents is expected.
Artificial intelligence (AI) is transforming the healthcare sector, offering new possibilities for diagnosis, treatment, prevention and research. AI can help improve the quality, accessibility and affordability of health services, and empower patients and health professionals alike. However, it also brings new challenges and risks, such as bias, discrimination, privacy breaches, safety issues and ethical dilemmas. How can we ensure that AI in healthcare is trustworthy, responsible and beneficial for all?
One way to address these challenges is to monitor and report AI incidents: events in which AI systems cause or contribute to negative outcomes, such as diagnostic errors, breaches of patient privacy, or biased treatment recommendations. These incidents vary widely in nature, from physical harm to patients to more intangible harms like psychological stress or privacy violations. Monitoring incidents helps us learn from mistakes, identify gaps and weaknesses, and improve the design, development and deployment of AI systems. It can also raise awareness and inform policy makers, regulators, legislators and the public about the opportunities and risks of AI in healthcare.
The Case for a Healthcare-Specific AI Incident Monitor
Given the sensitive nature of healthcare, the consequences of AI incidents can be particularly severe. Hence, there’s a pressing need for a dedicated framework to monitor, report, and learn from these incidents. This framework should:
- Capture a Broad Spectrum of Incidents: Including everything from minor errors with limited impact to major failures with widespread consequences.
- Facilitate Learning and Improvement: By systematically analyzing incidents, healthcare providers can improve AI systems, making them safer and more effective.
- Ensure Transparency and Accountability: Public reporting of AI incidents can build trust among patients and practitioners in AI-based healthcare systems.
Drawing Inspiration from OECD’s Approach
The OECD, in collaboration with the European Commission and other partners, has launched a global AI Incidents Monitor (AIM), which tracks and analyses AI incidents in real time and provides a "reality check" to ensure that the reporting framework and definitions function in practice. AIM is part of the OECD.AI Policy Observatory, a platform for sharing and shaping public policies for trustworthy AI, based on the OECD Principles on Artificial Intelligence.
The OECD's initiative provides a valuable model. By collecting data on AI incidents in real time, AIM offers insights into the types of AI applications that might require regulatory attention and into the root causes of AI failures. A comparable system in healthcare could similarly identify patterns, inform policies, and prevent the recurrence of incidents.
Key Components of an AI Incident Monitor in Healthcare
- Clear Definition of AI Incidents: Defining what constitutes an AI incident in healthcare is crucial. Clear definitions and taxonomies ensure consistency and interoperability across different healthcare systems and jurisdictions (see the sketch after this list).
- Assessing Severity and Impact: Classifying incidents based on their severity, scope, and scale – whether they impact an individual, organization, or society.
- Real-Time Data Collection and Analysis: Establishing a mechanism to collect detailed information about incidents, including their causes and consequences, as they occur.
- Global Consistency and Interoperability: A common framework for incident reporting would facilitate learning from global experiences and align international efforts.
- Policy Development and Regulation: Insights from the incident monitor should inform policy-making, helping to develop regulations that enhance the safety and efficacy of AI in healthcare.
- Prevention and Learning: Analyzing incidents to understand underlying causes and prevent future occurrences.
- Stakeholder Engagement: Involving a broad range of stakeholders, including healthcare professionals, AI developers, patients, and policymakers, is essential for a comprehensive understanding of AI incidents.
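To make the taxonomy and severity ideas above concrete, here is a minimal sketch of what a structured incident record might look like. All field names and categories are illustrative assumptions, not an established standard; real definitions and taxonomies would come from regulators and standards bodies.

```python
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum

class Severity(Enum):
    """Severity tiers, roughly following the individual/organization/society scale above."""
    NEAR_MISS = 1   # caught before reaching a patient
    MINOR = 2       # limited impact on an individual
    MODERATE = 3    # harm to an individual or disruption to an organization
    MAJOR = 4       # widespread or societal consequences

class HarmType(Enum):
    """Illustrative harm taxonomy; a real scheme would be set by a standards body."""
    DIAGNOSTIC_ERROR = "diagnostic_error"
    PRIVACY_BREACH = "privacy_breach"
    BIASED_RECOMMENDATION = "biased_recommendation"
    PHYSICAL_HARM = "physical_harm"
    PSYCHOLOGICAL_HARM = "psychological_harm"

@dataclass
class AIIncidentReport:
    """Minimal structured record for one healthcare AI incident."""
    system_name: str        # the AI system involved
    description: str        # free-text account of what happened
    harm_types: list[HarmType]
    severity: Severity
    occurred_at: datetime
    source: str             # e.g. "clinician report", "news", "patient complaint"
    contributing_factors: list[str] = field(default_factory=list)
```

A shared record format like this is what would make incident data comparable across hospitals and jurisdictions, the interoperability point raised above.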
How an AI Incident Monitor Could Work in Healthcare
An AI incident monitor for healthcare could operate in several ways. One approach is to collect data on AI incidents from a variety of sources, such as news reports, medical journals, and patient complaints, and then analyze it to identify trends and patterns, as in the sketch below.
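As a toy illustration of this first approach, the sketch below tallies incident records (using the hypothetical AIIncidentReport structure sketched earlier) by harm type, source, and severity to surface simple trends. A real analysis pipeline would add de-duplication, text mining over news and journal articles, and far more careful statistics.

```python
from collections import Counter

def summarize_incidents(reports: list[AIIncidentReport]) -> None:
    """Print simple frequency counts to surface patterns across collected reports."""
    by_harm = Counter(h.value for r in reports for h in r.harm_types)
    by_source = Counter(r.source for r in reports)
    by_severity = Counter(r.severity.name for r in reports)

    print("Incidents by harm type:", by_harm.most_common())
    print("Incidents by source:   ", by_source.most_common())
    print("Incidents by severity: ", by_severity.most_common())
```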
Another approach is a dedicated reporting system through which healthcare providers, patients, and others can submit incidents directly. Reports could then be investigated and analyzed to identify their causes and to develop recommendations for prevention; a minimal intake sketch follows.
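For this second approach, the intake step could be as simple as a validated function that accepts a submission, assigns an identifier, and queues the report for investigation. This sketch assumes the structures defined earlier and is illustrative only; a production system would need authentication, audit logging, duplicate detection, and de-identification of patient data.

```python
import uuid

# In-memory queue for illustration; a real system would use a durable store.
_pending_investigations: dict[str, AIIncidentReport] = {}

def submit_incident(report: AIIncidentReport) -> str:
    """Accept a report from a provider or patient and queue it for investigation."""
    if not report.description.strip():
        raise ValueError("A report must include a description of the incident.")
    report_id = str(uuid.uuid4())
    _pending_investigations[report_id] = report
    # In practice: notify reviewers, check for duplicates, and track case status.
    return report_id
```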
Several existing resources already capture information relevant to AI incidents in healthcare, including:
- The ECRI Institute’s Patient Safety Event Database
- The FDA’s Adverse Event Reporting System
- The World Health Organization’s Patient Safety Incident Reporting System
Challenges and Opportunities in Healthcare AI Incident Monitoring
Challenges:
- Data Privacy and Confidentiality: Balancing the need for thorough incident monitoring with the protection of patient data.
- Diverse Healthcare Ecosystems: Adapting the monitoring system to work across various healthcare technologies and organizational structures.
- Resource Allocation: Ensuring adequate resources for the development and maintenance of the monitoring system.
Opportunities:
- Enhanced Patient Safety: By identifying and rectifying AI-related errors promptly.
- Building Trust: Transparent reporting of incidents can strengthen trust in AI-driven healthcare systems.
- Informing Ethical AI Use: Insights from incident monitoring can guide the ethical development and deployment of AI in healthcare.
The necessity for a comprehensive AI incident monitor in healthcare is evident, mirroring the OECD's efforts. Such a system would play a crucial role in ensuring the safe, ethical, and effective use of AI in healthcare, addressing risks proactively and fostering an environment of continuous improvement and accountability. As AI becomes more deeply embedded in healthcare systems, vigilant monitoring of its impacts and incidents becomes increasingly critical.