A global technological revolution driven by artificial intelligence (AI) is unfolding before our eyes, a revolution further accelerated by the recent public debut of AI platforms incorporating large language models. Although the ultimate effects remain to be seen, there’s little doubt that AI will bring substantial changes to a myriad of sciences and industries, including health care. Indeed, AI is already being applied to a spectrum of health tools spanning diagnosis, treatment, and the management of care delivery. AI applications are also being incorporated into basic, clinical, and translational research and used in patient-provider communication and medical education.

However, as the pace of AI development and uptake accelerates, these technologies—many of them relatively untested—present challenges for the national health care enterprise. Academic medical centers (AMCs) are struggling to keep up with the breakneck pace of change and to navigate an ever-widening resource mismatch between academia and industry. At the same time, AI tools have the potential to enhance the success of our core missions while mitigating this resource mismatch. AMCs have an unprecedented opportunity to forge thoughtful partnerships and assume a leadership role in the responsible implementation of these new, powerful tools. Doing so, however, will require AMCs to reimagine and adapt their approaches to their core missions of care delivery, research, education, and community partnerships to achieve new goals while ensuring the safety and well-being of patients. It is thus imperative for AMCs to adopt a strategic approach that identifies priority areas for growth and development and enables expansion by building on existing strengths. In this article, we share our experience embarking on this journey at the Duke University School of Medicine and Duke Health.

Ensuring Trustworthy Health AI in Clinical Care and Research

Academia initially played a leading role in the development of AI, starting with its emergence as a field of study in the 1950s.1 More recently, however, the center of gravity in AI development has shifted and now clearly resides with industry. A recent report underscores this gap, with industry releasing 51 “notable” machine learning models over the past 20 years compared with 15 for academia and 21 for academic-industry collaborations.2

However, while industry can invest vast resources and expertise in new AI technologies, academia has a critical role to play in ensuring that what is being developed is trustworthy and has a greater societal benefit as its ultimate goal. Many AI tools have substantial potential, but the complexity of their inner workings may mask flaws and biases that only manifest with “real-world” use.3–5 In health AI, enthusiasm for new technologies and “fear of missing out” have often converged to create a “Wild West” environment. In such an atmosphere, the impetus to rapidly adopt new technologies may crowd out careful and cautious evaluation. If AI has the potential to enhance clinical decision-making, then as with any other new diagnostic or therapeutic technology, an evidence-based approach to validating and monitoring its application to clinical care is warranted. For this reason, there is a pressing need for multidisciplinary expertise that can provide a thorough, impartial assessment of AI technologies throughout their lifecycles.

At Duke, we have invested in programs and initiatives aimed at ensuring that AI tools developed or adopted for use in our health care environments are subjected to stringent, methodical scrutiny, not just of each tool’s technical attributes but also of the issues of ethics, equity, and fairness that may arise from its use. The mission of Duke AI Health, one of our first such programs, evolved from fostering the development of AI tools to enabling ethical and equitable data science across the entire health care and research enterprise. A key initiative arising from this focus was Duke’s Algorithm-Based Clinical Decision Support (ABCDS) Oversight. Focused on people, process, and technology, it grew as a partnership between our clinical and research missions to ensure that any algorithm deployed at Duke for patient care or clinical research is carefully evaluated for safety, effectiveness, and equity before its release, and that its performance is monitored continuously throughout its lifecycle.6

More recently, Duke has broadened its engagement as a co-founder of the national Coalition for Health AI (CHAI) initiative, which engages in large-scale efforts to ensure trustworthy health AI for patient benefit, including a recently published Assurance Standards Guide and reporting checklist for developing and deploying AI in health care.7 Duke and other AMCs have also engaged with industry partners such as Microsoft through the Trustworthy & Responsible AI Network (TRAIN), which seeks to research and develop tools that facilitate the application and monitoring of AI solutions, with patient safety and benefit as the top priority.8

Education and Workforce Development

Success in efforts to build trustworthy health AI depends on a skilled and knowledgeable workforce capable of using AI tools responsibly in appropriate contexts—and, just as importantly, of understanding their limitations.9 An AI-literate workforce is essential for implementing AI-based health technologies in ways that ensure benefit and value for patients, families, health care providers, and society. This must be accomplished at different levels, depending on the competencies required for different roles. For example, frontline clinicians who use ambient voice transcription technologies should be trained in their use, with sufficient emphasis on recognizing undesirable or unintended consequences. Furthermore, the proliferation of chatbots—computer programs that simulate human conversation—will require clinicians to be ready for new kinds of questions and concerns from patients. On a larger scale, as AI tools are integrated into the daily tasks of medical care, we will need to reimagine our approach to medical education.10 Providers need to be informed about the tools being used and know how to actively monitor and engage with them in an ethical manner.

These needs cannot be fulfilled solely by industry, which tends to prioritize the creation of products for the market. However, they do dovetail closely with the core missions of AMCs and their universities. There is an opportunity for mutually beneficial partnerships with industry in which AMCs provide critical context and education that inform the development of these tools, as well as generate scholarly evidence that informs responsible and safe implementation. Multiple programs are underway at Duke to build capacity for an AI-literate workforce, including the AI Health Fellowship Program, an intensive two-year training program in health data science that embeds learners with backgrounds in quantitative sciences within multidisciplinary teams that include clinical and statistical experts. More broadly, the AI Health Seminar Series provides access to experts in AI, machine learning, and related fields through lectures and workshops, many of which are free and open to the public.

Most recently, Duke AI Health, in conjunction with the Duke School of Nursing, helped incubate the Fostering AI/ML Research for Health Equity and Learning Transformation Hub (FAIR HEALTH) program, which provides a central resource for equipping nurses with the key competencies needed to use AI-based tools while ensuring safe and equitable care for patients.

Informing Policy

AMCs can make substantive contributions to trustworthy health AI in the domain of policy. The Duke Margolis Institute for Health Policy has been engaged in multiple partnerships and crosscutting projects aimed at addressing regulatory challenges and informing solutions related to the implementation of health AI and the data underpinning its use. In addition, the Duke Clinical Research Institute has developed a portfolio of projects with a specific focus on leveraging AI applications to support transformation of the clinical research enterprise. This includes a center devoted to re-engineering clinical trials to take advantage of AI capabilities, digital health technologies, novel data sources, and emerging methods for conducting research that can inform practice and improve access to equitable health care.

Patient and Community Engagement

Finally, we recognize that health AI cannot progress unless patients, families, and communities are engaged as equal partners. AI tools must be implemented and reported on transparently to engender widespread trust. A critical element of such trust is ensuring that the patient voice is part of the development and deployment of health AI.

The world of AI is filled with enthusiasm, often genuine, about the potential benefits these technologies offer. However, uncritical enthusiasm can lead to biased hype. Trusted, impartial partners are needed to bridge the gap between the AI community and a public that may harbor reasonable skepticism about the usefulness, safety, and fairness of the products that community generates. We believe that AMCs can be a crucial bridge between health AI innovators and their intended beneficiaries: patients, clinicians, and communities.

Conclusion

AI applications are already bringing transformative change to health care delivery and clinical research. However, for these technologies to realize their potential for benefit, clinicians, health systems, payers, patients, and the public must have confidence that they are being developed and used in ways that are transparent, equitable, and beneficial. AMCs are poised to play a central role in ensuring the trustworthiness of algorithmic technologies in health care.


Acknowledgments

The authors thank Jonathan McCall, MS, for editorial assistance in the preparation of this manuscript.

Dr. Pencina is a co-founder and board member of the Coalition for Health AI (CHAI). He also reports serving as a consultant for RevelAi and has received funding from the Moore Foundation for assessment of health AI maturity. Dr. Klotman has no disclosures related to this work to report.