The Essential Role of Governance in Ensuring the Safety and Quality of Artificial Intelligence in Healthcare

Laura M. Cascella, MA, CPHRM

Without doubt, artificial intelligence (AI) is on the cusp of revolutionizing fundamental aspects of society. The excitement and promise of AI have propelled these technologies into the global limelight and piqued the interest of leaders and stakeholders in many industries, including healthcare. As AI continues to proliferate, many healthcare organizations are looking for opportunities to incorporate AI applications in meaningful ways that will benefit patients, the workforce, the organization, and communities.

Yet, because of AI's rapid advancement, the ability of experts and regulators to establish safety standards and best practices has not kept pace. As a result, many organizations might venture into AI without fully considering its implications or putting proper precautions in place, which may lead to negative and dangerous outcomes — particularly when technologies interface directly with patient care.

Excitement vs. Reality

The Center for Connected Medicine's (CCM's) most recent Top of Mind for Top Health Systems survey found that almost 80 percent of healthcare executives believe that AI is the most exciting emerging technology in healthcare.1 Yet, in another survey of healthcare executives, CCM found that only 16 percent of respondents said their organizations had governance policies related to AI usage and data access.2

While much work remains from an overarching perspective (e.g., federal regulations and oversight, professional standards and best practices, research, etc.), healthcare organizations also need to take steps to ensure responsible use of AI. At the core of this responsibility is developing AI governance policies and procedures to reinforce an organizational commitment to beneficence, nonmaleficence, and justice in the deployment of AI.

An important first step in this process is establishing a panel or committee to facilitate AI governance. This group should consist of appropriate and diverse representatives, such as AI developers and experts, data scientists and analytics experts, clinicians, individuals with legal and ethical expertise, information technology staff, risk managers, and patient representatives.

The function and oversight of the AI governance committee will largely depend on the organization's current use of the technology and future plans for adopting AI applications. Broad areas of focus that the committee should consider include the following:

  • Staying current on evolving laws, regulations, and standards related to AI to ensure compliance and utilization of best practices. Because AI changes rapidly, healthcare organizations should continuously monitor for developments that might affect their use of the technology.
  • Ensuring that adequate and appropriate personnel, resources, and technology infrastructure are in place to support AI applications throughout their lifecycles.
  • Developing requirements for using AI technologies that have transparent algorithms and understandable outputs vs. systems using black-box reasoning. Requirements should take into account issues of trustworthiness, autonomy, and overall safety.
  • Performing due diligence of AI applications, including verifying the quality and validity of the technology, ensuring that applications are built on data that reflect the patient population for which the technology will be used, and confirming that applications do not introduce bias or perpetuate health disparities.
  • Verifying AI applications' privacy and security features to prevent avoidable breaches and safeguard patients' protected health information and other sensitive data.
  • Developing guidelines to help manage ethical challenges that might arise with the use of AI, including guidance on fair process and allocation of resources.
  • Determining how best to deploy AI applications within the healthcare setting, including considerations related to workflow processes, authorization, communication, change management, and more.
  • Devising ethical standards related to disclosure and informed consent, such as telling patients when AI is involved in their care, for what purposes, and potential benefits vs. risks; making patients aware of their right to refuse AI as part of their care; and securing patient consent to use their health information for AI purposes.
  • Developing educational and communication strategies to ensure a properly trained and well-informed workforce. Like any technology, AI will involve a learning curve, and some applications will be more complex than others. Clinicians do not need to be AI experts, but they should have a basic understanding of how these programs and tools function and their purpose so they can effectively educate patients.
  • Establishing procedures for ongoing monitoring and evaluation of AI applications, including a system to capture safety and quality issues and guidance for how to document those issues. Monitoring should include looking for issues associated with "data drift, input–output variation, unexpected outcomes, data reidentification risk, and clinical practice impacts."3
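Of the responsibilities above, ongoing monitoring is the most readily automated. As a minimal, purely illustrative sketch (the feature, data values, and threshold are hypothetical, not drawn from this article or any cited framework), a monitoring team might screen one model input for data drift by comparing its recent distribution against the training-era baseline using a two-sample Kolmogorov–Smirnov statistic:

```python
"""Minimal data-drift check using a two-sample Kolmogorov-Smirnov statistic.

Hypothetical example: compare the distribution of one model input
(e.g., patient age at inference time) against the training baseline.
All names, values, and the 0.2 threshold are illustrative only.
"""

import bisect


def ks_statistic(baseline, recent):
    """Maximum distance between the two empirical CDFs (0.0 = identical)."""
    b_sorted, r_sorted = sorted(baseline), sorted(recent)
    n, m = len(b_sorted), len(r_sorted)
    d = 0.0
    for x in set(baseline) | set(recent):
        cdf_b = bisect.bisect_right(b_sorted, x) / n
        cdf_r = bisect.bisect_right(r_sorted, x) / m
        d = max(d, abs(cdf_b - cdf_r))
    return d


def drifted(baseline, recent, threshold=0.2):
    """Flag the feature for governance-committee review when the KS
    distance exceeds an organization-chosen threshold (0.2 is arbitrary)."""
    return ks_statistic(baseline, recent) > threshold


# Synthetic data: training-era ages vs. a recent inference window
baseline_ages = [34, 45, 52, 60, 61, 47, 55, 39, 66, 58]
recent_ages = [72, 75, 81, 78, 69, 74, 80, 77, 83, 70]

print(drifted(baseline_ages, recent_ages))  # markedly older population: True
```

In practice a committee would pair a statistical trigger like this with the documentation and escalation procedures described above, so that a flagged feature prompts human review rather than an automatic response.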

AI holds much promise for improving healthcare in a multitude of ways — from reducing clinician burnout, to tackling staffing challenges, to improving diagnosis and treatment. However, as the familiar adage states, with great power comes great responsibility. For healthcare systems, a crucial step in taking responsibility is establishing an AI governance committee and implementing governance policies. Doing so can help focus the organization’s AI strategy, confirm due diligence of AI programs and vendors, and ensure that AI efforts are always considered in the context of safety, quality, equity, and value.

For more information about AI governance, see the resources cited in the endnotes below.

Endnotes


1 Center for Connected Medicine & KLAS Research. (2023). Top of Mind for Top Health Systems 2024: AI vaults to the top of the agenda. Retrieved from https://connectedmed.com/resources/ai-dominating-focus-of-health-system-leaders-with-rise-of-generative-ai-and-other-tools/

2 Center for Connected Medicine & KLAS Research. (2024). How health systems are navigating the complexities of AI. Retrieved from https://connectedmed.com/resources/ai-dominating-focus-of-health-system-leaders-with-rise-of-generative-ai-and-other-tools/

3 Reddy, S., Allan, S., Coghlan, S., & Cooper, P. (2020). A governance model for the application of AI in health care. Journal of the American Medical Informatics Association, 27(3), 491–497. doi: https://doi.org/10.1093/jamia/ocz192; Omale, G. (2019, July 12). The need for AI governance in healthcare. Gartner. Retrieved from www.gartner.com/smarterwithgartner/the-need-for-ai-governance-in-healthcare; Gattadahalli, S. (2020, November 3). Ten steps to ethics-based governance of AI in health care. STAT News. Retrieved from www.statnews.com/2020/11/03/artificial-intelligence-health-care-ten-steps-to-ethics-based-governance/; Cunha, R. (2023, April 16). AI governance for healthcare: A comprehensive framework. LinkedIn. Retrieved from www.linkedin.com/pulse/ai-governance-healthcare-comprehensive-framework-renato-cunha/
