As new AI tools are developed in radiology, the medical community is exploring the ways these advances could unintentionally incorporate biases. Experts in imaging AI took up the question at the 2022 ACR Imaging Informatics Summit. In a session moderated by Tessa S. Cook, MD, PhD, vice chair for practice transformation at the University of Pennsylvania, panelists addressed sources of bias in AI models and the potential for those biases to perpetuate health disparities if AI is implemented without careful thought.
“We know there are multiple potential sources of bias in AI models, and we know those biases have the potential to amplify health disparities,” Cook explained. “For us, as clinical practitioners and imaging physicians, it is important to be aware, as we incorporate tools in our workflow, that they could have unanticipated or unintended consequences for our patients.”
Defining Bias
Addressing bias in AI requires awareness of the multiple, co-existing definitions that frame the conversation. “Bias” commonly refers to “cognitive” bias, a human filtering process that simplifies information processing and can result in prejudice toward an idea, person or group. Further categorization distinguishes “explicit” from “implicit” bias, that is, intentional from unintentional prejudice.
Within the context of AI, the definition of bias can be further narrowed to specify:
- Algorithmic bias – a tendency for AI models to reflect human biases present within the training data.
- Statistical bias – systematic errors, or reproducible differences between true and expected values. As Cook explained, “statistical bias occurs when there are inherent errors in the model that make it not capture or represent reality.”
- Prediction bias – the difference between the average of a model’s predictions and the average of the labels in the data (illustrated in the sketch after this list).
- Social bias – the potential for decisions based on AI results to adversely affect underrepresented groups and exacerbate healthcare inequities.
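To make the prediction bias definition concrete, the short sketch below computes it for a hypothetical set of binary labels and model scores (illustrative values, not results from any real model):

```python
import numpy as np

# Hypothetical ground-truth labels and model probability outputs
# (illustrative values only).
labels = np.array([1, 0, 1, 1, 0, 0, 1, 0])
predictions = np.array([0.9, 0.4, 0.7, 0.8, 0.5, 0.3, 0.6, 0.4])

# Prediction bias: mean prediction minus mean label.
# A well-calibrated model should produce a value near zero.
prediction_bias = predictions.mean() - labels.mean()
print(f"mean prediction: {predictions.mean():.3f}")  # 0.575
print(f"mean label:      {labels.mean():.3f}")       # 0.500
print(f"prediction bias: {prediction_bias:+.3f}")    # +0.075
```

Here the hypothetical model predicts systematically higher than the labels, yielding a small positive prediction bias.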
Discussions regarding bias in AI often introduce the concept of the “bias-variance tradeoff,” referring to prediction bias. Variance refers to the variability of a model’s predictions for a given data point across different training sets. High-bias, low-variance models tend to “underfit”: an oversimplified model fails to capture the patterns in the training data. In contrast, high-variance, low-bias models tend to “overfit”: an overly complex model becomes oversensitive to noise in the training data and fails to generalize.
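The tradeoff is easy to demonstrate with a toy regression. This minimal sketch (an illustration on assumed data, not an example from the summit) fits polynomials of increasing degree to noisy samples of a smooth curve:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D regression task: noisy samples of a smooth function.
x_train = np.sort(rng.uniform(0, 1, 30))
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.3, 30)
x_test = np.sort(rng.uniform(0, 1, 30))
y_test = np.sin(2 * np.pi * x_test) + rng.normal(0, 0.3, 30)

def fit_and_score(degree):
    """Fit a polynomial of the given degree; return train and test RMSE."""
    coeffs = np.polyfit(x_train, y_train, degree)
    train_rmse = np.sqrt(np.mean((np.polyval(coeffs, x_train) - y_train) ** 2))
    test_rmse = np.sqrt(np.mean((np.polyval(coeffs, x_test) - y_test) ** 2))
    return train_rmse, test_rmse

for degree in (1, 3, 9):
    train_rmse, test_rmse = fit_and_score(degree)
    print(f"degree {degree}: train RMSE {train_rmse:.2f}, test RMSE {test_rmse:.2f}")
```

Typically, the degree-1 fit shows high error on both sets (underfitting), the degree-9 fit shows low training error but higher test error (overfitting), and the intermediate degree balances bias and variance.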
Sources of Bias
Statistical bias in AI may arise at any stage of the AI development pipeline, generally categorized as stemming from:
- Data handling. Bias may derive from imbalanced data sets that do not represent actual patient demographics, varying levels of data annotator expertise, a lack of standard annotation guidelines, heterogeneous image/scanner quality, and failure to identify data leakage, such as overlap between training and testing data (see the splitting sketch after this list).
- Model development. Bias during model development arises from inattention to the bias-variance tradeoff, predisposing models to over- or underfit the training data.
- Performance evaluation. Selecting inappropriate metrics for performance evaluation can misrepresent how well a model actually performs.
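To illustrate the data leakage point above: a common pitfall occurs when several images from the same patient land in both the training and test sets. The sketch below, using hypothetical record fields, performs a patient-level split that rules this out:

```python
import random

# Hypothetical study records: several images belong to each patient.
# Splitting at the image level could place one patient's images in both
# the training and test sets -- a form of data leakage.
records = [
    {"patient_id": pid, "image": f"img_{pid}_{i}.dcm"}
    for pid in ["P001", "P002", "P003", "P004", "P005"]
    for i in range(3)
]

def patient_level_split(records, test_fraction=0.2, seed=42):
    """Split records so that no patient appears in both partitions."""
    patients = sorted({r["patient_id"] for r in records})
    random.Random(seed).shuffle(patients)
    n_test = max(1, int(len(patients) * test_fraction))
    test_patients = set(patients[:n_test])
    train = [r for r in records if r["patient_id"] not in test_patients]
    test = [r for r in records if r["patient_id"] in test_patients]
    return train, test

train, test = patient_level_split(records)
assert not ({r["patient_id"] for r in train} & {r["patient_id"] for r in test})
print(f"{len(train)} training images, {len(test)} test images, no shared patients")
```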
Sources of AI bias are often difficult to identify prior to model deployment and are therefore frequently identified retrospectively. As a result, transparency in reporting the details of model development and critical review of published results are essential. Failure to scrutinize models may lead to deployment of biased AI models that perpetuate existing disparities and ultimately harm patients. Examples include models trained on patient populations whose characteristics differ from the patients on whom they are used (e.g., adult models applied to pediatric patients, or models trained predominantly on patients of one race and applied to patients of another) and models whose outputs can harm patients in subgroups insufficiently represented in the training data.
Using Bias to Mitigate Bias
Because unrecognized bias can adversely affect patients, intentionally accounting for bias presents an opportunity to mitigate it during AI development. One direct approach leverages “equitable” bias: oversampling data from underserved populations and/or groups shown to be negatively affected by social bias in previous AI use. Alternatively, “biased” AI tools can be selectively deployed in the specific patient populations that are well represented in the model’s training data set.
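As a minimal sketch of the oversampling idea, assuming a hypothetical subgroup attribute attached to each training example:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical training set with an imbalanced subgroup attribute:
# 90 examples from group A, 10 from group B (illustrative only).
features = rng.normal(size=(100, 4))
groups = np.array(["A"] * 90 + ["B"] * 10)

def oversample_group(features, groups, target_group):
    """Resample the underrepresented group, with replacement, until it
    matches the size of the largest group -- one simple way to introduce
    'equitable' bias deliberately at the data level."""
    counts = {g: int(np.sum(groups == g)) for g in np.unique(groups)}
    n_target = max(counts.values())
    idx = np.flatnonzero(groups == target_group)
    extra = rng.choice(idx, size=n_target - len(idx), replace=True)
    keep = np.concatenate([np.arange(len(groups)), extra])
    return features[keep], groups[keep]

features_bal, groups_bal = oversample_group(features, groups, "B")
print({str(g): int(np.sum(groups_bal == g)) for g in np.unique(groups_bal)})
# -> {'A': 90, 'B': 90}
```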
Practical Steps to Address AI Bias
Paul Yi, MD, director of the University of Maryland Medical Intelligent Imaging Center, summarized additional practical areas where AI bias can be mitigated:
- Data level. Report demographics, balance demographics and ensure accurate labels across subgroups (a subgroup reporting sketch follows this list).
- Model level. Leverage novel techniques, such as ensemble learning, which generates and combines multiple models to solve a problem, and vision transformer models, which treat input images as a series of patches and can outperform convolutional neural networks in computational efficiency and accuracy.
- Clinical deployment. Incorporate real-time surveillance to monitor for unexpected model performance that might be secondary to underlying biases.
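Subgroup reporting at the data and evaluation levels can be as simple as stratifying a standard metric by a demographic attribute. The sketch below, using hypothetical labels and predictions, shows how an acceptable overall accuracy can conceal a subgroup gap:

```python
import numpy as np

# Hypothetical test-set results with a demographic attribute attached.
labels = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
preds  = np.array([1, 0, 1, 0, 0, 1, 1, 0, 0, 0])
groups = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

# Aggregate accuracy can hide subgroup gaps, so report per-group metrics.
print(f"overall accuracy: {np.mean(preds == labels):.2f}")  # 0.70
for g in np.unique(groups):
    mask = groups == g
    acc = np.mean(preds[mask] == labels[mask])
    print(f"group {g}: n={mask.sum()}, accuracy={acc:.2f}")  # A: 0.80, B: 0.60
```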
Radiology professionals increasingly recognize the risks posed by data and model drift: the real-world performance of AI tools often deteriorates over time due to changes in patient demographics, imaging protocols and imaging hardware, as well as evolving disease prevalence, among other factors. Active surveillance leveraging business intelligence solutions can enable early intervention to prevent clinical decisions based on erroneous AI results.
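As a minimal sketch of such surveillance, assume a hypothetical daily feed of the model’s positive-call rate; a rolling mean that drifts well beyond its baseline triggers a review:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical daily stream of AI outputs: the fraction of studies the
# model flags as positive. A sustained shift can signal data or model drift.
baseline_rate = 0.12
daily_rates = np.concatenate([
    rng.normal(baseline_rate, 0.02, 60),  # stable period
    rng.normal(0.22, 0.02, 20),           # simulated drift (e.g., new scanner)
])

window = 14           # rolling window in days
threshold = 3 * 0.02  # alert when the rolling mean drifts ~3 SDs from baseline

for day in range(window, len(daily_rates)):
    rolling_mean = daily_rates[day - window:day].mean()
    if abs(rolling_mean - baseline_rate) > threshold:
        print(f"day {day}: rolling positive rate {rolling_mean:.2f} "
              f"deviates from baseline {baseline_rate:.2f} -- review the model")
        break
```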
Ultimately, accelerating the clinical deployment of AI tools increases the risk of exposing patients to biases that perpetuate healthcare disparities. A deliberate, thoughtful approach to mitigating unaddressed sources of bias and their downstream effects on patient care, however, can establish best practices to guide the ethical, effective use of imaging AI in practice.