Introduction

Steady advances in artificial intelligence (AI) can lead to the development of intelligent assistants that help clinicians make better data-driven decisions. AI can also be an important tool in hospital resource allocation, making health care more affordable. AI has the capacity to revolutionize patient care and outcomes, representing a paradigm shift with innovations that promise a better future, but there are also serious reasons for pause. In my role as Director of the Renaissance Computing Institute (RENCI) at UNC-Chapel Hill, I watch the collective efforts of our staff and researchers, and those of researchers around the country, as they try to address some of the world's most pressing health concerns. Unfortunately, equitable health outcomes remain difficult to achieve. These difficulties stem from many sources, including unequal health care access, existing biases in our understanding of diseases, and disparities in clinical trial enrollment across populations and genders. Feeding our pre-existing knowledge into machine learning algorithms to build AI-based health care may amplify these existing disparities and introduce new equity concerns.1

One of the most compelling aspects of AI in health care is its ability to enhance decision-making processes in general health care settings, with particular medical conditions, and during public health crises, as we experienced with the COVID-19 pandemic. During the pandemic and post-pandemic era in particular, AI has powered our ability to better understand viral spread, response efforts, vaccine development and deployment, and health disparities related to viral epidemics, enhancing our ability to plan for future rapid emergency responses and ease medical burdens. In the NIH Rapid Acceleration of Diagnostics program (NIH RADx), researchers used a combination of laboratory diagnosis and AI to predict the severity of COVID-related inflammatory diseases in children,2 and multiple groups participated in a challenge (and successfully developed AI algorithms) to enhance our understanding of post-acute sequelae of SARS-CoV-2 (PASC; also "long COVID").3 Vaishya and colleagues describe multiple applications for AI during COVID-19, including early detection and diagnosis, treatment monitoring, contact tracing, case and mortality projections, drug and vaccine development, health care workload reduction, and prevention.4 While many believe AI may be the future of health care, it is already well represented in our present approaches.

AI algorithms can rapidly and accurately analyze large amounts of medical data, helping health care providers make informed, timely decisions. In fact, AI has already shown great success in the analysis and interpretation of medical images, yielding impressive results with radiological and ophthalmological data.5,6 In emergency medicine, too, where speed of care is vital, AI-powered triage can prioritize patients by condition severity, freeing medical workers to focus on the most urgent cases and helping ensure rapid responses to critical cases. Weisberg and colleagues describe how AI-enabled triage acts as an additional tool that can accelerate diagnosis and help optimize emergency workflows, easing the burden on workers, with applications in intracranial bleeding, pulmonary concerns, and musculoskeletal trauma.7 As a result, patients may experience better outcomes, and emergency departments can make the most of limited resources. Further, AI can forecast patient admission rates, identify potential intensive care cases, and even optimize staff scheduling and bed management. By leveraging these insights, hospitals can operate more efficiently, reduce wait times, and better tend to patient needs.

If handled appropriately, as these AI-driven innovations evolve and mature, we can expect to see even greater improvements in patient outcomes, health care accessibility, and the overall well-being of individuals and communities.

AI and the Social Determinants of Health

Currently, conversations about AI in the context of social determinants of health (SDOH) emphasize the potential role AI might play in improving individual and community health outcomes. Despite initiatives like "Healthy People 2030," which seek to reduce health disparities, most researchers still struggle to understand and account for all the complexities of SDOH, a barrier that AI-enabled analysis may help to overcome.8 During COVID-19, AI aided in disease forecasting, diagnosis, and prognosis, and the development of large language models (LLMs) like GPT-4 (the LLM behind ChatGPT) expanded these capabilities. That said, challenges such as limited digital literacy and algorithms biased by the data used to train them underscore the need for strategic solutions, with the prime imperative of ensuring globally equitable AI health care benefits.1,8

In AI modeling, SDOH data are crucial, but a lack of data standardization, concerns about anonymization, and limited model interpretability (itself highly important) create ongoing challenges for researchers trying to understand AI's impacts.8 AI's role in assessing the impact of SDOH on health outcomes shows particular promise with regard to mental health and chronic disease conditions. In fact, Ong and colleagues discuss how AI applications have democratized specialized care, especially in low- and middle-income countries, with current and potential future applications in low-income and rural communities in North Carolina (and beyond). For example, AI-enabled virtual triage and diagnostic assistance in telehealth make it possible for some individuals with internet access to seek treatment or diagnosis even if they are not located near a hospital or medical facility. That said, bias, disinformation, and related risks may require regulatory frameworks to facilitate responsible AI deployment, especially with regard to medical LLMs.8,9

SDOH data collection methods also currently lack standardization, which hinders global research progress. Efforts like the AMA Integrated Health Model and the Gravity Project aim to address this concern, but health infrastructure limitations, regulatory barriers, and environmental impacts challenge AI implementation.8 Digital inclusion is crucial for the future of equitable AI adoption and use, and achieving it will require deep consideration of digital literacy and inclusivity strategies. Robust regulatory frameworks and privacy-preserving technologies, meanwhile, will help mitigate some of the risks and ensure ethical AI deployment globally.8,9

Bias-imposed Limitations

AI's ability to inform health care decision-making is only as good and accurate as the data used to train the AI models. If the data do not adequately reflect certain communities, social situations, or conditions, models trained on those data will inherit the same blind spots and be less useful for those communities. Unfortunately, despite the promise of AI-driven health technologies, real-world applications already show numerous failures, and the worst performance and harms have fallen on communities already experiencing systemic health care disparities.1,10 This includes disparities associated with an individual's race, gender, sexual orientation, and nation of origin, among others. Even when these risks are acknowledged, AI remains heavily reliant on its data inputs, creating a bias-in, bias-out scenario.

Unfortunately, efforts to mitigate bias in data are not straightforward, both because "bias" lacks a clear definition and because bias mitigation is often treated as a "post-hoc reflection on bias rather than as a deliverable by design…[when] health equity considerations should commence during data collection and curation through post-deployment monitoring".10

As such, scholars exhibit justifiable caution when exploring AI-based health care and the gender and racial biases inherent in training such systems. AI algorithms must be trained on accurate, unbiased datasets that represent diverse patient populations and encompass various genders, races, and ethnic demographics.10,11 Unfortunately, when data lack diversity or fail to adequately represent a population, AI-driven decision-making creates or broadens disparities in diagnosis and treatment.1,10

To assist, scholars advocate for greater transparency and explainable AI systems when integrating AI into health care practices.10 This transparency facilitates collaborative scrutiny among researchers, health care providers, and the greater biomedical community, helping identify and address discernible algorithmic biases.10,11

While many examples of racial bias in AI-based health care exist, the problems extend beyond race. For example, one algorithm used to predict health care needs "considered health expenses and costs as a proxy for healthcare needs, without having first controlled for evident inequalities in access to healthcare services".12 Using cost as a proxy disproportionately impacts certain underrepresented communities, but it can also skew health care determinations for any patient strictly on the basis of income.
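To make this mechanism concrete, the following minimal simulation (synthetic data only, not the cited study's code; the group sizes, variable names, and model choice are all illustrative assumptions) shows how training on cost as a proxy label for need systematically under-ranks a group with reduced access to care:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 50_000

group = rng.integers(0, 2, n)            # 1 = hypothetical group with reduced access
need = rng.gamma(2.0, 1.0, n)            # true health need (unobserved by the model)
signal = need + rng.normal(0.0, 0.5, n)  # observed clinical signal correlated with need

# Recorded cost reflects utilization, not need: at the same level of need,
# reduced access to care halves spending.
cost = need * np.where(group == 1, 0.5, 1.0)

# Train on cost as the label, mirroring the flawed proxy choice.
X = np.column_stack([signal, group])
score = LinearRegression().fit(X, cost).predict(X)

# Among patients the cost proxy ranks as "highest risk," members of the
# reduced-access group must be sicker to earn the same score.
top = score > np.quantile(score, 0.9)
print("mean true need, full access:   ", need[top & (group == 0)].mean())
print("mean true need, reduced access:", need[top & (group == 1)].mean())
```

Both groups here are constructed with identical health needs, yet the proxy-trained model flags reduced-access patients as high risk only when they are substantially sicker, the bias-in, bias-out pattern described above.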

Access Limitations

On January 23, 2024, the NIH RADx Data Hub (a project that RENCI is involved with) hosted a webinar discussing SDOH in the context of the COVID-19 pandemic.13 During the webinar, researchers from the RADx Underserved Populations initiative (RADx-UP) presented on the access challenges of integrating AI into health care. A group from the Johns Hopkins School of Public Health discussed access concerns and bias directed at transgender populations during the pandemic, explaining how fear of stigmatization and judgment in hospital settings made some patients reluctant to seek care, alongside more general problems with appointment and testing availability. A group from the University of Nebraska Medical Center presented on health care access issues among migrant families, emphasizing locational and socioeconomic barriers to health care, and even school access, during the pandemic.13 A common theme was the local nature of access issues related to testing, health care, and community assistance, which varied by state and community during the pandemic. Such scenarios require targeted, community-specific interventions. The webinar's discussion thus underscored the importance of addressing these concerns while tailoring AI solutions to overcome barriers and close gaps in health care access and outcomes.13

Unfortunately, it can be difficult to integrate AI and health care equitably, as there is no one-size-fits-all solution for communities such as those discussed during the RADx Data Hub webinar. Lack of internet or phone access, stigmatization, safety concerns, and distrust of the health care system, particularly among marginalized populations, must all be considered as we move forward with AI in health care, or we risk deepening existing disparities.

AI to the Rescue: A Personal Account of Access Improvement

There is hope. In recent research with colleagues in the UNC Schools of Nursing and Computer Science, we show that AI methods have the potential to improve decision-making during prehospital cardiac care.14,15 We are studying AI-based systems that can help emergency medical service clinicians, such as EMTs, make informed prehospital decisions with respect to acute coronary syndrome. This improved decision-making can buy crucial time, allowing emergency department staff to prepare more effectively for a patient's arrival.14,15

Working with colleagues in cardiology and computer science, we are also developing AI technology to diagnose cardiac amyloidosis from echocardiogram images and electronic health record (EHR) data. Cardiac amyloidosis is a challenging condition that often leads to heart failure and traditionally requires expensive cardiac magnetic resonance imaging for confirmation. While expert cardiologists in tertiary care centers can suspect cardiac amyloidosis from an echocardiogram, this may not be the case in under-resourced settings. An AI-enabled solution, similar to the work we are doing, may assist with early detection by identifying potential cases, serving as a screening tool for deciding when to order a cardiac MRI.16

AI algorithms that integrate data from multiple modalities (in this case, echocardiograms and EHRs) may prove important for screening in other situations as well.16 The end goals are improved patient outcomes, less unnecessary and expensive testing, and reduced use of cardiac care resources.
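As a rough illustration of what such multimodal integration can look like, the sketch below combines an echocardiogram-derived embedding with tabular EHR features through simple late fusion. Everything here is a hypothetical assumption for illustration (the synthetic data, feature shapes, and choice of logistic regression), not our published method:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 1_000

echo_embedding = rng.normal(size=(n, 128))  # e.g., output of a pretrained image encoder
ehr_features = rng.normal(size=(n, 20))     # e.g., age, labs, diagnosis flags
label = rng.integers(0, 2, n)               # 1 = confirmed cardiac amyloidosis (synthetic)

# Late fusion: concatenate both modalities into a single feature vector.
X = np.concatenate([echo_embedding, ehr_features], axis=1)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X, label)

# A high predicted probability would flag a patient for confirmatory
# cardiac MRI; the model screens rather than diagnoses.
risk = clf.predict_proba(X)[:, 1]
```

In practice the encoders and fusion step would be far more sophisticated, but the screening logic is the same: an inexpensive multimodal risk score gates an expensive confirmatory test.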

Dreaming of the Future

In the future, we might dream of personalized treatments based on genetic data, real-time disease detection, and virtual assistants able to provide instant medical guidance regardless of an individual's location, insurance, or proximity to a health care facility. We can hope for AI-driven decision-making at scale, precision medicine, and efficient and equitable standards of care, creating healthier communities without regard to geographic location, race, ethnicity, or gender identity.

This beautiful dream for the future of health care can be enabled by AI, but to ensure such an equitable future standard of care, we must guard against bias and address the existing disparities in our data. AI is only as good as the data we feed it, and access concerns must also be addressed. We have a long way to go, but prioritizing these concerns today will help bring about an AI-based health care future with fewer disparities between populations.

Acknowledgments

A.K. is the Director of RENCI (Renaissance Computing Institute); Research Professor, Computer Science; Lead Faculty, Master's in Applied Data Science, SDSS; Co-Director, Informatics and Data Science, TraCS; Core Faculty, Carolina Health Informatics Program; and Fellow, Sheps Center.