Artificial Intelligence Risks: Biased Data and Functional Issues
Laura M. Cascella, MA, CPHRM
One of the major red flags associated with artificial intelligence (AI) is the potential for bias. Bias can occur for various reasons. For example, the data used to train AI applications might be biased; research has shown racial, gender, socioeconomic, and age-related disparities in medical studies. Algorithms that rely on data from these studies will reflect that bias, perpetuating the problem and potentially leading to suboptimal recommendations and patient outcomes.1 Likewise, bias can permeate the rules and assumptions used to develop AI algorithms, which “may unfairly privilege one particular group of patients over another.”2
In some cases, bias might occur because of a mismatch between the data or environment used to train an AI system and the conditions in which the program or tool is applied in real life. A study in BMJ Quality & Safety refers to this as “distributional shift” and notes that this mismatch can occur because of the following (a brief illustrative sketch follows the list):
- Bias in the data training set (e.g., data represent outlying rather than typical cases)
- Changes in disease patterns over time that are not introduced to the AI system (e.g., data are not updated, so the program continues to rely on the initial data training set)
- Inappropriate application of an AI system to an unanticipated patient context (e.g., a different population than originally intended)3
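To make “distributional shift” concrete, the following sketch compares the distribution of one input feature (patient age, as a stand-in) in a model’s training data against data from the population where the tool is actually deployed. This is a minimal illustration, not part of the cited study; the feature, sample sizes, and use of a two-sample Kolmogorov-Smirnov test are all assumptions made for the example.

```python
# Illustrative sketch: flag a distributional shift between training data and
# live clinical data for one input feature (here, patient age).
# Assumes numpy and scipy are available; the 0.05 cutoff is arbitrary.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(seed=0)

# Training set skewed toward younger patients (the "outlying cases" problem)
train_ages = rng.normal(loc=45, scale=10, size=5000)

# Deployment population: mostly older adults, unlike the training data
deploy_ages = rng.normal(loc=72, scale=8, size=1000)

# Two-sample Kolmogorov-Smirnov test: could both samples plausibly come
# from the same distribution?
stat, p_value = ks_2samp(train_ages, deploy_ages)

if p_value < 0.05:
    print(f"Possible distributional shift (KS={stat:.2f}, p={p_value:.1e}); "
          "review before relying on the model for this population.")
else:
    print("No significant shift detected for this feature.")
```

In practice, such monitoring would span many features and outcomes, but even this single-feature check shows how a model trained on one population can be quietly mismatched to another.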
An example of inappropriate application of AI was described in a Health Data Management article that discussed AI-enabled facial analysis systems used to detect pain and monitor disease. An investigation of algorithmic bias showed that these systems did not perform well when used with older adults who had dementia.4 Although liability implications related to AI are still evolving, a Healthcare IT News article notes that “A clinician relying on a device in a medical setting who doesn't account for varied outcomes for different groups of people might be at risk of a malpractice lawsuit.”5
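One way to account for “varied outcomes for different groups of people” is to report a model’s performance separately for each subgroup rather than as a single aggregate figure. The sketch below is a hypothetical audit; the group labels and records are invented for illustration.

```python
# Illustrative sketch: audit a model's accuracy per demographic subgroup
# instead of reporting one aggregate score. All data here are invented.
from collections import defaultdict

# (subgroup, model_prediction, true_label) -- hypothetical audit records
records = [
    ("under_65", 1, 1), ("under_65", 0, 0), ("under_65", 1, 1),
    ("under_65", 0, 0), ("65_plus_dementia", 1, 0),
    ("65_plus_dementia", 0, 1), ("65_plus_dementia", 1, 1),
    ("65_plus_dementia", 0, 1),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, predicted, actual in records:
    total[group] += 1
    correct[group] += int(predicted == actual)

for group in sorted(total):
    accuracy = correct[group] / total[group]
    print(f"{group:>18}: accuracy {accuracy:.0%} over {total[group]} cases")
# Aggregate accuracy can look acceptable even while one subgroup
# (here, older adults with dementia) performs far worse.
```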
Another important consideration with AI is that machine learning is literal and results-oriented: it relies on the data it receives to run algorithms that generate outputs, whereas humans have the ability to see “bigger picture” influences. As a result, AI systems might be rigid in recognizing and adapting to nuances, changes in context, and idiosyncrasies.
This “insensitivity to impact” can prevent AI from factoring in the consequences of false positives and false negatives. The aforementioned BMJ Quality & Safety article notes that although humans’ tendency to err on the side of caution might result in a higher number of false positives and an apparent decrease in accuracy, “this behaviour alteration in the face of a potentially serious outcome is critical for safety . . .”6
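The trade-off described above can be made explicit as a decision threshold. If missing a serious condition (a false negative) is far costlier than an unnecessary alert (a false positive), an expected-cost calculation pushes the alerting threshold well below the 50 percent cutoff a naive classifier would use. The cost figures in this sketch are arbitrary assumptions, not clinical guidance.

```python
# Illustrative sketch: choose an alert threshold that "errs on the side of
# caution" by weighting false negatives more heavily than false positives.
# The cost figures are arbitrary assumptions, not clinical guidance.
COST_FALSE_NEGATIVE = 20.0  # missed serious condition
COST_FALSE_POSITIVE = 1.0   # unnecessary follow-up

# Expected cost, given predicted risk p:
#   alert:     (1 - p) * COST_FALSE_POSITIVE
#   no alert:  p * COST_FALSE_NEGATIVE
# Alert whenever the expected cost of silence exceeds the cost of alerting,
# i.e., when p > C_FP / (C_FP + C_FN).
threshold = COST_FALSE_POSITIVE / (COST_FALSE_POSITIVE + COST_FALSE_NEGATIVE)

def should_alert(predicted_risk: float) -> bool:
    """Raise an alert when predicted risk exceeds the cost-based threshold."""
    return predicted_risk > threshold

print(f"Alert threshold: {threshold:.3f}")  # ~0.048, far below 0.5
print(should_alert(0.10))  # True: even modest risk triggers a cautious alert
```

Lowering the threshold this way produces more false positives and an apparent drop in accuracy, which is precisely the cautious behavior the quoted passage describes as critical for safety.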
Other examples of how AI functioning might lead to unintended consequences include the following (a sketch of a simple safeguard against the first failure mode appears after the list):
- Unsafe failure mode. A program or system makes predictions despite limited confidence or insufficient information, rather than failing safely by withholding an output.
- Negative side effects. A program or system performs a narrow function and cannot take a broader context into account.
- Reward hacking. A program or system finds ways to meet specified objectives without achieving long-term goals.
- Unsafe exploration. A program or system pushes safety boundaries in an attempt to learn new strategies or methods.7
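The first failure mode in the list has a straightforward defensive counterpart: have the system abstain and defer to a clinician when its confidence falls below a floor, rather than emit a prediction anyway. The sketch below is hypothetical; the 0.85 threshold and the Prediction type are assumptions made for illustration.

```python
# Illustrative sketch: a "safe failure" wrapper that withholds a prediction
# when model confidence is below a minimum, instead of guessing.
# The 0.85 floor and the Prediction type are hypothetical assumptions.
from dataclasses import dataclass
from typing import Optional

MIN_CONFIDENCE = 0.85

@dataclass
class Prediction:
    label: str
    confidence: float

def safe_predict(raw: Prediction) -> Optional[Prediction]:
    """Return the prediction only if confidence clears the floor;
    otherwise return None to signal 'defer to a human reviewer'."""
    if raw.confidence < MIN_CONFIDENCE:
        return None  # fail safely: no output rather than an unreliable one
    return raw

result = safe_predict(Prediction(label="pneumonia", confidence=0.62))
print(result or "Low confidence: deferred to clinician review")
```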
Acknowledgment of issues related to biased data and problems with AI functioning has elevated concerns about the overall safety and reliability of AI technologies. As the authors of the BMJ Quality & Safety study observe, “The rapid pace of change, diversity of different techniques and multiplicity of tuning parameters make it difficult to get a clear picture of how accurate these systems might be in clinical practice or how reproducible they are in different clinical contexts.”8
Thus, amid growing enthusiasm for AI, it is imperative that researchers, AI developers, public health experts, clinicians, and others recognize how AI might reinforce and exacerbate existing problems with bias and generate new dilemmas. Failure to identify these issues and work toward viable solutions will have implications for patient safety and quality of care and, ultimately, will undermine the proposed benefits of AI.
To learn more about other challenges and risks associated with AI, see MedPro’s article Using Artificial Intelligence in Healthcare: Challenges and Risks.
Endnotes
1 Slabodkin, G. (2019, August 13). AI, machine learning algorithms are susceptible to biased data. Health Data Management. Retrieved from www.healthdatamanagement.com/news/ai-machine-learning-algorithms-are-susceptible-to-biased-data
2 Shroff, A. (2022, April 14). Healthcare AI bias: Reasons and resolutions. Healthcare IT Today. Retrieved from www.healthcareittoday.com/2022/04/14/healthcare-ai-bias-reasons-and-resolutions/
3 Challen, R., Denny, J., Pitt, M., Gompels, L., Edwards, T., & Tsaneva-Atanasova, K. (2019, March). Artificial intelligence, bias and clinical safety. BMJ Quality & Safety, 28(3), 231-237. doi: 10.1136/bmjqs-2018-008370
4 Slabodkin, G. (2019, July 27). AI presents host of ethical challenges for healthcare. Health Data Management. Retrieved from www.healthdatamanagement.com/news/ai-presents-host-of-ethical-challenges-for-healthcare
5 Jercich, K. (2021, October 29). Machine learning can revolutionize healthcare, but it also carries legal risks. Healthcare IT News. Retrieved from www.healthcareitnews.com/news/machine-learning-can-revolutionize-healthcare-it-also-carries-legal-risks
6 Challen, et al., Artificial intelligence, bias and clinical safety.
7 Ibid.
8 Ibid.