Introduction

After a decade of working in medical image processing as a professor of radiology at the University of North Carolina School of Medicine, Stephen Aylward, PhD, founded the North Carolina office of the open-source software company Kitware. The company provides artificial intelligence (AI) and other advanced technical computing services to a variety of companies and conducts National Institutes of Health (NIH), Defense Advanced Research Projects Agency (DARPA), and US Department of Defense (DoD) grant-funded research on medical imaging, scientific computing, and computer vision.

With its open-source philosophy, Kitware makes its software available to small and large businesses as well as researchers to use in developing new products and services in health care and other industries. In health care, interest in machine learning—the use and development of computer systems that can, with the help of algorithms and other statistical models, learn by inference instead of explicit instruction—is growing swiftly. However, many practitioners and policymakers remain unfamiliar with its potential benefits and biases.

Aylward currently leads a collaborative software project focused on machine learning technologies aimed at innovating in medical imaging research and product development. He spoke with guest editor Sean Sylvia, PhD, health economics professor at UNC, about the evolving role of these technologies in health care and policy.

“There is such momentum behind this effort that it is going to come to you,” Aylward said. “Educate yourself so that when it does come, when you do start getting AI vendors in front of you, showing you different options, you can interact with them in an informed way.”

This interview has been condensed and edited for clarity.

Sean Sylvia (NCMJ): You are actively integrating AI with existing medical imaging technology. What seems to work well, and what are the challenges with that?

Stephen Aylward (Kitware): We learned early on that the software has to come to the clinician. The open-source MONAI platform (Medical Open Network for AI; https://monai.io), for example, allows someone with limited computer experience to set up a server that is running an AI method. The AI method could be one that they created or one that they’ve chosen from a catalog of pre-trained models. These servers enable the AI methods to be seamlessly integrated with clinical workflows and IT systems in hospitals. We tried to make that as easy as possible, to help AI researchers and product developers quickly bring their methods to clinicians.
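
To make that concrete, the sketch below shows what pulling a pre-trained model from MONAI's model zoo and running it can look like, using MONAI's bundle API. The spleen_ct_segmentation bundle name and the random input volume are illustrative placeholders, not a system Aylward describes here.

```python
# A minimal sketch: download a pre-trained bundle from the MONAI model zoo,
# load its network with weights, and run inference on a placeholder volume.
import torch
from monai.bundle import download, load

# Fetch the bundle (name is illustrative; see the MONAI model zoo catalog).
download(name="spleen_ct_segmentation", bundle_dir="./bundles")

# Instantiate the bundle's network with its pre-trained weights.
model = load(name="spleen_ct_segmentation", bundle_dir="./bundles")
model.eval()

# Placeholder preprocessed CT volume: (batch, channel, depth, height, width).
ct_volume = torch.rand(1, 1, 96, 96, 96)
with torch.no_grad():
    segmentation = model(ct_volume)
```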

We see this wonderful shift in the industry where software and hardware vendors are now working more closely together because it is in their mutual best interest. For example, ultrasound hardware manufacturers have realized that software can be used to distinguish their ultrasound probe from everyone else’s. For a small investment, you can connect an ultrasound probe to your phone or tablet and have it run AI algorithms that process the ultrasound data as it’s being acquired.

NCMJ: You are working a lot on point-of-care ultrasound applications. What is the impact that you hope to have there?

Aylward: Ultrasound is a low-cost imaging technology, and the earlier you can make a diagnosis in the field, perhaps using ultrasound, the better the outcomes for the patients. However, ultrasound images are difficult to interpret. So, point-of-care ultrasound is a wonderful opportunity for artificial intelligence. Our goal is to help military medics, EMS units, and emergency personnel in general use an ultrasound device without ever having to interpret an ultrasound image. These systems can be used to detect a pneumothorax (collapsed lung) or detect intra-abdominal bleeding as part of a FAST (Focused Assessment with Sonography in Trauma) exam, with AI guidance. We want AI to help them make the quality assessments that are necessary for in-field patient triage.

NCMJ: With algorithms that you’re applying to image datasets, do you worry a lot about data drift like we would with other types of data? If, for example, you have a changing patient population that you’re trying to adjust for?

Aylward: Once you have trained an AI system, you lock it down, and it is set for a specific population. If you want to account for drift in your patient population, you are going to have to develop a “new” algorithm, from the point of view of the FDA. Ideally, you have trained your AI system to work for a broad patient population. That’s why having collaborators at multiple institutions is so important when developing machine learning products.

One of the rapidly evolving AI technologies that I think we’re going to see more and more of is federated learning. It involves sending the AI method to the data, as opposed to the other way around. What this means is that you no longer have to transfer data from multiple hospitals to a central location to train your AI system. Instead, you train your algorithm by systematically sending it to your collaborating sites, to learn from each site’s data. This is done in an iterative process, to build up an AI system that works well across multiple centers and for a diverse patient population. Confidential patient data doesn’t have to be shared. Federated learning systems can avoid many of the critical and sometimes insurmountable challenges associated with multi-center data transfer agreements and diverse patient accrual.
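
As a rough illustration of the idea, the sketch below runs federated averaging over three hypothetical sites: each round, the current model is sent to every site, updated locally on data that never leaves that site, and the returned weights are averaged. The site names, data, and simple linear model are invented for illustration.

```python
# A minimal federated-averaging sketch. Each site runs a local update on its
# own private data; only model weights travel between sites and the server.
import numpy as np

def local_train(global_weights, site_data, lr=0.01):
    """Hypothetical local step: one gradient-descent update of a linear
    model on this site's (X, y) data, which never leaves the site."""
    X, y = site_data
    grad = X.T @ (X @ global_weights - y) / len(y)  # mean-squared-error gradient
    return global_weights - lr * grad

def federated_round(global_weights, sites):
    """Send the current model to every site, collect the locally updated
    weights, and average them to form the next global model."""
    local_models = [local_train(global_weights, data) for data in sites.values()]
    return np.mean(local_models, axis=0)

# Three collaborating sites, each holding private (invented) data.
rng = np.random.default_rng(0)
sites = {name: (rng.normal(size=(50, 4)), rng.normal(size=50))
         for name in ["hospital_a", "hospital_b", "hospital_c"]}

weights = np.zeros(4)
for _ in range(20):  # the iterative process across sites
    weights = federated_round(weights, sites)
```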

Once federated learning systems have become commonplace, we will then see federated evaluation systems. Instead of federated systems only being used for AI training, you will also be able to send your trained AI algorithm to multiple collaborating hospitals for verification. Federated evaluation will tell you how well your algorithm performs on a diverse patient population, perhaps as part of an FDA application or an AI certification process.
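
Federated evaluation follows the same pattern. Continuing the hypothetical sketch above, a frozen model is sent to each site, and only the per-site metrics come back:

```python
# Continuation of the sketch above (reuses np, sites, and weights).
def local_evaluate(weights, site_data):
    """Hypothetical per-site metric: mean squared error on local data."""
    X, y = site_data
    return float(np.mean((X @ weights - y) ** 2))

# Only metrics leave each site; the patient data stays put.
site_metrics = {name: local_evaluate(weights, data) for name, data in sites.items()}
overall = np.mean(list(site_metrics.values()))
print(site_metrics, overall)  # per-site and pooled performance
```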

NCMJ: What are some unique concerns about data bias for health care with this application of artificial intelligence?

Aylward: Avoiding bias and accounting for drift require access to the right training data. Before AI, algorithms that processed medical images had to be hand-coded, for example, to delineate the boundaries of the organs. This meant that the algorithm designers had direct control over the algorithm’s biases. With modern machine-learning methods such as deep learning, algorithms are more complex, more obscure, and more driven by the data. We are learning that data from any one hospital will contain a multitude of biases, arising from the patient population, the equipment, the technicians, and the physicians.

Say you have hundreds of examples of common diseases in your data, but only a few examples of a rare disease. If you employ a naive AI training technique to learn from that data, the rare disease might be completely lost by the AI. That rare disease may ultimately have a zero percent chance of being diagnosed by the AI system.
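
One common mitigation, sketched below, is to reweight the training loss by inverse class frequency so that the rare disease still influences the model. The class counts and two-class setup are illustrative, not from a real dataset.

```python
# A minimal sketch of class-weighted training to counter rare-disease imbalance.
import torch
import torch.nn as nn

# Invented counts: 990 examples of a common finding, 10 of a rare disease.
class_counts = torch.tensor([990.0, 10.0])

# Inverse-frequency weights: each rare-disease example counts ~99x more.
weights = class_counts.sum() / class_counts
weights = weights / weights.sum()

loss_fn = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(8, 2)          # placeholder model outputs for a batch of 8
labels = torch.randint(0, 2, (8,))  # placeholder ground-truth labels
loss = loss_fn(logits, labels)
```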

Biases may be known or unknown, and they may be mitigated or accentuated by the AI system. Researchers are working hard to overcome these challenges. Similar biases exist in popular AI chat systems and in other computer vision applications, but these biases are particularly critical when applying AI to medical imaging. The FDA is very aware of this problem. They are strongly encouraging multi-center clinical trials for AI applications, where the centers that participate in the algorithm’s evaluation are not the centers that participated in the algorithm’s development. The good news is that we’ve made wonderful progress in addressing these issues. Concerns remain, and many naive implementations are still being reported in publications; however, many outstanding researchers and the FDA are making progress.

NCMJ: What do you say to people who hear all of this and think it sounds great, but they still struggle with electronic health records being properly integrated into their health care work?

Aylward: There is phenomenal momentum behind the AI effort. It is going to come to you, but you don’t have to force it at this point in time. It will be thoroughly tested and smoothly integrated into clinical workflows, and it will become invaluable. However, you don’t have to be the first person using it. Many have the feeling that they must adopt AI right now, but if you aren’t comfortable with it, or if you don’t want to upgrade your systems right now, you’re going to be fine. Don’t panic. Let other people suffer the cutting edge. We still have a couple of bumps in the road to get past.

For policymakers: it’s coming, and you have to start preparing for it now. Consider developing a plan on how you want to integrate AI and what safeguards you want to set up. Look at federated learning and educate yourself so that when it does come, when you are being inundated by AI vendors showing you a multitude of different options, you can interact with them in an informed way.

NCMJ: Speaking of “don’t panic”—there is a workforce question here as well. Is there concern about jobs being replaced by software like what you’re describing?

Aylward: I was at UNC 30 years ago, and back then, when you signed a waiver for surgery, one of the sentences began, “I acknowledge that medicine is an art as well as a science.” I think that there is very much an aspect of art to medicine that is not easily captured by AI. To put this in concrete terms: you might diagnose a broken bone on an X-ray; however, how to deal with that broken bone in an 80-year-old woman versus a 13-year-old boy is vastly different. Right now, because of concerns regarding bias, the practical use of AI is typically limited to looking only at images or certain components of the EHR. AI is a tool that, when it becomes broadly available, you should make use of and not be afraid of. It is not going to replace jobs, but it is going to make you better at your job and able to focus less on the mundane.

Where you’re going to see it first, by the way, is with patient reports and insurance. That’s where the most impact is going to happen in the near term, and who isn’t going to welcome not having to deal with identifying the proper insurance code or taking appropriate notes during a surgery or patient visit? Those are things that AI is going to be good at, but the art of medicine is here to stay for a very long time.

NCMJ: How do you develop a responsible AI strategy if you’re a health system? What are the key elements to consider as systems develop strategies for evaluating new AI that they’re looking to adopt?

Aylward: As a policymaker, you have to educate yourself about the risks and benefits offered by various AI systems and vendors. You then have to look at your system and identify where the pain points exist.

Follow a traditional business approach. AI doesn’t require anything new or different. You know where your costs and bottlenecks exist, and AI, again, is just a tool. It doesn’t require anything magical.