Jojy Cheriyan, MD, PhD, MPH, MPhil

The promise of technology to improve human lives is undeniable across industries, and technology has played a crucial role in reducing disparities in several service delivery sectors. In healthcare, however, where disparities and inequities are widely debated, a growing body of reports and studies describes disparities in patient outcomes and quality of care that are worsening because of digital technologies and, most recently, Artificial Intelligence (AI) based tools. These revelations come amid accelerating adoption of digital technologies and AI. The unintended consequences of poor planning, development, deployment, and maintenance of these technologies are exacerbating existing inequalities, creating a paradox in which the very tools designed to uplift could instead deepen disparities.
The number of medical devices incorporating AI has risen sharply in recent decades. As of December 15, 2024, approximately 950 AI-enabled devices had been authorized by the U.S. Food and Drug Administration (FDA, 2024). The first, authorized in 1995, was the PapNet Testing System, which used neural networks to help pathologists analyze Pap smear slides for cervical cancer screening more efficiently, marking a significant early step in the integration of AI into medical diagnostics. Since then the pace has spiked, from six AI devices authorized in 2015 to 221 in 2023.
Bridging and Perpetuating Health Disparities
Digital technologies can be transformative in addressing disparities, particularly in accessing resources and services. While AI has been utilized in medicine for decades, the emergence of Large Language Models (LLMs) like ChatGPT and other open-source tools has democratized access, bringing AI to a broader audience and sparking widespread debate.
Over the past decade, telemedicine has revolutionized healthcare delivery, enabling patients in remote areas to connect with specialist healthcare providers and thereby reducing geographical barriers. Similarly, online educational platforms have democratized access to knowledge, giving individuals from underserved communities opportunities to learn and grow in ways that were previously inaccessible.
Furthermore, data-driven insights are helping policymakers identify and address disparities in social determinants of health, education, and employment. By leveraging big data analytics, organizations can tailor interventions to meet the unique needs of different populations, thereby promoting more equitable outcomes.
Despite these promising benefits, the design and deployment of digital technologies often unintentionally exacerbate existing disparities. Issues such as poor quality or incomplete data, digital illiteracy, lack of access to high-speed internet, and biases embedded in algorithms can prevent certain populations from accessing the benefits of technological advancements.
The digital divide remains stark, with marginalized communities often lacking reliable internet access or the skills needed to navigate online platforms. According to a 2021 Pew Research Center report, approximately 7% of U.S. adults still lack internet access, with rates significantly higher among lower-income households, rural residents, and individuals from racial and ethnic minority groups.
In healthcare, algorithms are especially prone to bias, particularly when trained on datasets that do not adequately represent diverse populations or clinically relevant groups.
A 2019 study published in the journal Science highlighted significant racial bias in a widely used health risk prediction algorithm. Deployed across healthcare systems in the U.S., the algorithm prioritized white patients over Black patients for high-risk care management services, even when Black patients had worse underlying health conditions. The bias stemmed from the algorithm’s reliance on healthcare costs as a proxy for health needs, inadvertently perpetuating systemic inequities in healthcare spending. In 2022, The Lancet Digital Health published studies emphasizing the risk of bias in healthcare algorithms, particularly the dangers posed by underrepresentation in training datasets; for instance, one article discussed how algorithmic performance often varies significantly across demographic groups, such as underrepresented skin tones in dermatology datasets.
Machine learning models trained on skewed datasets can produce outcomes that disproportionately harm underrepresented groups or clinically defined populations, and these populations are not always defined solely by race.
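The cost-as-proxy failure mode described above can be illustrated with a small, purely hypothetical simulation (group names, spending figures, and cutoffs are all invented for illustration, not drawn from the cited study). Two groups have identical distributions of true health need, but one group historically incurs lower spending for the same need, so a "risk score" that predicts cost systematically under-selects that group for extra care:

```python
import random

random.seed(0)

def simulate_patient(group):
    # True health need (e.g., number of chronic conditions) is
    # identically distributed in both groups.
    need = random.randint(0, 10)
    # Hypothetical assumption: group B incurs less spending for the
    # same need (e.g., due to access barriers), so cost understates
    # group B's true need.
    cost_per_condition = 1000 if group == "A" else 800
    return {"group": group, "need": need, "cost": need * cost_per_condition}

patients = [simulate_patient(g) for g in ("A", "B") for _ in range(5000)]

# "Risk score" = cost, the proxy label. Enroll everyone at or above a
# high-cost cutoff (top fifth of costs) in a care-management program.
cutoff = sorted(p["cost"] for p in patients)[int(0.8 * len(patients))]
selected = [p for p in patients if p["cost"] >= cutoff]

share_b = sum(p["group"] == "B" for p in selected) / len(selected)
avg_need = {
    g: sum(p["need"] for p in selected if p["group"] == g)
       / max(1, sum(p["group"] == g for p in selected))
    for g in ("A", "B")
}
print(f"Group B share of program slots: {share_b:.0%}")
print(f"Average true need among selected patients: {avg_need}")
```

Despite equal underlying need, group B receives well under half of the program slots, and the group B patients who do qualify are, on average, sicker than the group A patients selected alongside them, which is the same qualitative pattern the Science study reported.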
Medical scientists and physicians have been outspoken about these issues, frequently raising concerns in medical journals and conferences. Articles in journals such as Nature Medicine (2022) and presentations at conferences like the American Medical Informatics Association (AMIA) Annual Symposium emphasized the need for transparency, robust dataset diversity, and fairness metrics in AI model development. For instance, a 2022 commentary in JAMA called for mandatory reporting of demographic data in training datasets to mitigate bias and improve equity in AI-enabled healthcare tools.
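In practice, the subgroup reporting these commentaries call for amounts to computing performance metrics separately per demographic group rather than a single aggregate. Below is a minimal sketch with hypothetical data and an invented `subgroup_report` helper (not from any cited tool), showing how an aggregate number can hide a large sensitivity gap:

```python
def subgroup_report(records):
    """records: list of dicts with 'group', 'label' (1 = disease), 'pred'."""
    report = {}
    for g in sorted({r["group"] for r in records}):
        rs = [r for r in records if r["group"] == g]
        tp = sum(r["label"] == 1 and r["pred"] == 1 for r in rs)
        fn = sum(r["label"] == 1 and r["pred"] == 0 for r in rs)
        tn = sum(r["label"] == 0 and r["pred"] == 0 for r in rs)
        fp = sum(r["label"] == 0 and r["pred"] == 1 for r in rs)
        report[g] = {
            "n": len(rs),
            # Guard against empty strata rather than dividing by zero.
            "sensitivity": tp / (tp + fn) if tp + fn else None,
            "specificity": tn / (tn + fp) if tn + fp else None,
        }
    return report

# Hypothetical evaluation set: the model misses far more true cases
# in group B than in group A, at identical specificity.
records = (
    [{"group": "A", "label": 1, "pred": 1}] * 90 +
    [{"group": "A", "label": 1, "pred": 0}] * 10 +
    [{"group": "A", "label": 0, "pred": 0}] * 95 +
    [{"group": "A", "label": 0, "pred": 1}] * 5 +
    [{"group": "B", "label": 1, "pred": 1}] * 60 +
    [{"group": "B", "label": 1, "pred": 0}] * 40 +
    [{"group": "B", "label": 0, "pred": 0}] * 95 +
    [{"group": "B", "label": 0, "pred": 1}] * 5
)

for group, metrics in subgroup_report(records).items():
    print(group, metrics)
```

Pooled across groups, this model's sensitivity is 75%, a number that conceals a 90% versus 60% split; reporting the stratified table makes the inequity visible before deployment.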
Another notable example is found in facial recognition technology, which has shown higher error rates for people with darker skin tones, raising concerns about discriminatory practices in law enforcement and hiring. As a result, rather than leveling the playing field, digital technologies risk entrenching existing inequalities through biased data and algorithms.
The Role of Competent Leadership:
The key to harnessing the potential of digital technologies while mitigating their risks lies in careful planning and development processes. Technologies must be designed inclusively, prioritizing user-centered approaches that account for the diverse needs of various communities and medically defined populations. Furthermore, stakeholders—including technologists, community members, and policymakers—should collaborate to ensure that digital tools are deployed in equitable ways.
For instance, the development of digital health interventions should involve input from the populations they aim to serve, ensuring that tools are clinically and culturally relevant and accessible. This participatory approach fosters trust and promotes the effective use of technology within diverse communities.
Public-private partnerships can also facilitate the broader rollout of essential services, such as high-speed internet, smartphones, and digital literacy programs, to bridge the digital divide. Above all, transparency is key in building trust and ensuring equitable access to digital resources. Open communication between stakeholders, clear goals and objectives, and robust accountability mechanisms are crucial for fostering a truly inclusive digital future.
Conclusion:
Digital technologies offer an incredible opportunity to enhance outcomes and reduce disparities across healthcare domains. However, without careful consideration and intentional planning, these same technologies risk exacerbating the very inequalities they strive to overcome. As noted earlier, experts writing in JAMA and presenting at venues such as the AMIA Annual Symposium have called for mandatory reporting of demographic data in AI training datasets, stronger oversight, and greater diversity in healthcare AI applications. Furthermore, a 2012 Health Affairs policy brief highlights that uncoordinated investments in technology often result in redundant systems, leading to wasteful spending rather than the promised cost reductions or improvements in care quality. A key challenge is that many technological investments lack integration across systems, duplicating effort and escalating costs, a problem attributable to poorly planned implementations and the absence of a coordinated strategy in healthcare settings.
It is also essential for stakeholders at all levels to engage in conversations about the ethical implications of technology and to actively work towards inclusive solutions that ensure equitable access and outcomes. This multifaceted approach is crucial in harnessing the transformative power of digital technologies while safeguarding against their propensity to perpetuate existing divides. By acknowledging both the potential benefits and risks of digital and AI technologies, we can pave the way for a future where technology serves as a true equalizing force in society.
References:
- Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of the 2018 ACM Conference on Fairness, Accountability, and Transparency.
- Dillon, D. R., & Hujer, M. (2016). Transforming Education through Digital Technology. Educational Technology, 56(1), 16–20.
- Dorsey, E. R., & Topol, E. J. (2016). State of Telehealth. New England Journal of Medicine, 375(2), 154–161.
- Kellermann, A. L., & Jones, S. S. (2013). What It Will Take to Achieve the As-Yet-Unfulfilled Promises of Health Information Technology. Health Affairs, 32(1), 63–68.
- Mackey, T. K., & Liang, H. (2013). The Role of Big Data in Addressing Social Disparities in Health. Journal of Public Health Policy, 34(1), 25–45.
- Pew Research Center. (2018). Digital Divide Persists Even as Lower-Income Americans Make Gains in Tech Adoption.
- Van Dijk, J. (2017). The Digital Divide. Policy Press.
- Warschauer, M. (2004). Technology and Social Inclusion: Rethinking the Digital Divide. MIT Press.
- Wen, D., Khan, S. M., Ji-Xu, A., Ibrahim, H., Smith, L., Caballero, J., Zepeda, L., de Blas Perez, C., Denniston, A. K., Liu, X., & Matin, R. N. (2022). Characteristics of Publicly Available Skin Cancer Image Datasets: A Systematic Review. The Lancet Digital Health, 4(1), e64–e74.
- Vartan, S. (2019, October 24). Racial Bias Found in a Major Health Care Risk Algorithm. Scientific American. https://www.scientificamerican.com/article/racial-bias-found-in-a-major-health-care-risk-algorithm/
- Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations. Science, 366, 447–453. https://doi.org/10.1126/science.aax2342
- Ratwani, R. M., Sutton, K., & Galarraga, J. E. (2024). Addressing AI Algorithmic Bias in Health Care. JAMA, 332(13), 1051–1052. https://doi.org/10.1001/jama.2024.13486
- Lavin, A., Gilligan-Lee, C. M., Visnjic, A., et al. (2022). Technology Readiness Levels for Machine Learning Systems. Nature Communications, 13, 6039. https://doi.org/10.1038/s41467-022-33128-9
- Federation of American Scientists. (2024, September 26). Improving Health Equity Through AI. https://fas.org/publication/improving-health-equity-through-ai/
- Ethics, Bias, and Transparency for People and Machines. (n.d.). Data Science at NIH. https://datascience.nih.gov/artificial-intelligence/initiatives/ethics-bias-and-transparency-for-people-and-machines
- Health Affairs. (2012). Reducing Waste in Health Care [Health Policy Brief]. https://doi.org/10.1377/hpb20121213.959735
The content provided on this blog, “Medical and Healthcare Insights,” is for informational purposes only and is not intended as a substitute for professional medical advice, diagnosis, or treatment. The views and opinions expressed in the blog posts are those of the author and do not necessarily reflect the official policy or position of any healthcare institution, organization, or employer. Readers are encouraged to consult with qualified healthcare professionals for any health-related questions or concerns. The author and the blog are not responsible for any errors or omissions, or for any outcomes related to the use of this information. Use of this blog and its content is at your own risk.
