The Potential for Bias in Machine Learning and Opportunities for Health Insurers to Address It

Stephanie S. Gervasi
Irene Y. Chen
Aaron Smith-McLallen
David Sontag
Ziad Obermeyer
Michael Vennera
Ravi Chawla
Peer-Reviewed Article
February 2022


Health insurers can implement strategies to address bias in machine learning and predictive modeling used in care management to help reduce health inequities and racial disparities.


Insurers and health plans often use machine learning and predictive modeling to identify patients with complex health needs for interventions. Researchers have previously shown how algorithms and computational tools commonly used by insurers can contribute to health inequities. This analysis identifies how health insurers’ use of machine learning can create opportunities for bias and outlines strategies to help health insurers address bias and increase fairness.


Insurers commonly use predictive modeling to prioritize care management, for example by predicting the likelihood of chronic disease, hospitalization, or medication adherence. Predictive models can encode systemic bias because disparities in health care access and utilization may leave some subpopulations underrepresented in the data. Health insurers can audit their predictive models for bias through approaches such as representational fairness, counterfactual reasoning, and error rate balance and error analysis. As an industry, insurers can focus on identifying and remediating algorithmic bias, obtaining and ethically using race and ethnicity data, addressing bad data, and engaging all relevant voices.
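One of the audit approaches mentioned above, error rate balance, can be made concrete with a small sketch: comparing a model's false negative and false positive rates across subgroups. The code below is illustrative only; the data, group labels, and function names are hypothetical and are not drawn from the article or any insurer's system.

```python
# Sketch of an error-rate-balance audit for a binary risk model.
# All records below are synthetic; the tuple layout (group, y_true, y_pred)
# is an assumption made for illustration.

def error_rates_by_group(records):
    """Compute false negative and false positive rates per subgroup.

    records: iterable of (group, y_true, y_pred) tuples, where y_true is
    the observed outcome and y_pred is the model's binary decision.
    """
    counts = {}
    for group, y_true, y_pred in records:
        c = counts.setdefault(group, {"fn": 0, "fp": 0, "pos": 0, "neg": 0})
        if y_true == 1:
            c["pos"] += 1
            if y_pred == 0:
                c["fn"] += 1  # missed a true positive
        else:
            c["neg"] += 1
            if y_pred == 1:
                c["fp"] += 1  # flagged a true negative
    return {
        g: {
            "fnr": c["fn"] / c["pos"] if c["pos"] else None,
            "fpr": c["fp"] / c["neg"] if c["neg"] else None,
        }
        for g, c in counts.items()
    }

# Synthetic example: the model misses more true positives in group B,
# which an error-rate-balance audit would flag for review.
data = [
    ("A", 1, 1), ("A", 1, 1), ("A", 0, 0), ("A", 0, 1),
    ("B", 1, 0), ("B", 1, 1), ("B", 0, 0), ("B", 0, 0),
]
rates = error_rates_by_group(data)
# Group A FNR = 0/2 = 0.0; group B FNR = 1/2 = 0.5 -> unequal error rates.
```

In practice, an insurer would run a comparison like this on held-out data with sensitive attributes handled under appropriate governance, and investigate any subgroup whose error rates diverge materially from the rest.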


Health insurers must recognize the possibility of bias in machine learning and implement strategies to detect and remediate it.

Posted to The Playbook on
Level of Evidence: Expert Opinion