Physicians making life-and-death decisions about organ transplants, cancer treatments or heart surgeries typically don't give much thought to how artificial intelligence might help them. And that's how researchers at Carnegie Mellon University say clinical AI tools should be designed — so doctors don't need to think about them.
A surgeon might never feel the need to ask an AI for advice, much less allow it to make a clinical decision for them, said John Zimmerman, the Tang Family Professor of Artificial Intelligence and Human-Computer Interaction in CMU's Human-Computer Interaction Institute (HCII). But an AI might guide decisions if it were embedded in the decision-making routines already used by the clinical team, providing AI-generated predictions and evaluations as part of the overall mix of information.
Zimmerman and his colleagues call this approach "Unremarkable AI."
"The idea is that AI should be unremarkable in the sense that you don't have to think about it and it doesn't get in the way," Zimmerman said. "Electricity is completely unremarkable until you don't have it."
Qian Yang, a Ph.D. student in the HCII, will address how the Unremarkable AI approach guided the design of a clinical decision support tool (DST) at CHI 2019, the Association for Computing Machinery's Conference on Human Factors in Computing Systems, May 4–9 in Glasgow, Scotland.
Yang, along with Zimmerman and Aaron Steinfeld, associate research professor in the HCII and the Robotics Institute, is working with biomedical researchers at Cornell University and CMU's Language Technologies Institute on a DST to help physicians evaluate heart patients for treatment with a ventricular assist device (VAD). This implantable pump aids diseased hearts in patients who can't receive heart transplants, but many recipients die shortly after the implant. The DST under development uses machine learning methods to analyze thousands of cases and calculate a probability of whether an individual might benefit.
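The general idea behind such a tool can be sketched as a model that maps patient features to a benefit probability. The actual model, features, and coefficients used by the CMU and Cornell researchers are not described in this article, so everything in the following minimal sketch is invented for illustration only:

```python
import math

# Hypothetical sketch only: the real DST's features and coefficients are not
# public. Invented logistic-model weights over invented patient features.
WEIGHTS = {"age": -0.04, "ejection_fraction": 0.06, "creatinine": -0.5}
BIAS = 1.0

def benefit_probability(patient):
    """Return an estimated probability (0-1) that a patient benefits from a VAD."""
    score = BIAS + sum(WEIGHTS[f] * patient[f] for f in WEIGHTS)
    # Logistic function squashes the linear score into a probability.
    return 1.0 / (1.0 + math.exp(-score))

patient = {"age": 62, "ejection_fraction": 20, "creatinine": 1.4}
print(f"Estimated benefit probability: {benefit_probability(patient):.2f}")
```

A real clinical model would be trained on the thousands of historical cases the article mentions; the point here is only the shape of the output — a single probability per patient.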
DSTs have been developed to help diagnose or plan treatment for a number of medical conditions and surgical procedures, but most fail to make the transition from lab to clinical practice and fall into disuse.
"They all assume you know you need help," Zimmerman said. They often face resistance from physicians, many of whom don't think they need help, or see the DST as technology designed to replace them.
Yang used the Unremarkable AI principles to design how the clinical team would interact with the DST for VADs. These teams include mid-level clinicians, such as nurse practitioners, social workers and VAD coordinators, who routinely use computers; and surgeons and cardiologists, who value their colleagues' advice over computational support.
The natural time to incorporate the DST's prognostications is during multidisciplinary patient evaluation meetings, Yang said. Though physicians make the ultimate decision about when or if to implant a VAD, the entire team is often present at these meetings and computers are already in use.
Her design automatically incorporates the DST prognostications into the slides prepared for each patient. In most cases, the DST information won't be significant, Steinfeld suggested, but for certain patients, or at certain critical points for each patient, the DST might provide information that demands attention.
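The "unremarkable" presentation logic described above could be sketched as follows. The DST line is always embedded in each patient's slide, but it only calls attention to itself when the prediction crosses some threshold; the threshold value and the text format below are invented for illustration, not taken from the actual design:

```python
# Hypothetical sketch of conditional slide annotation. The 0.30 cutoff and the
# "[!]" flag are invented; the real design's presentation rules are not public.
ATTENTION_THRESHOLD = 0.30

def slide_annotation(patient_label, benefit_prob):
    """Return the DST line to embed in a patient's evaluation slide."""
    line = f"{patient_label}: predicted VAD benefit {benefit_prob:.0%}"
    if benefit_prob < ATTENTION_THRESHOLD:
        # Only flag cases where the prediction demands attention;
        # otherwise the line stays unremarkable background information.
        line += "  [!] below typical benefit range"
    return line

print(slide_annotation("Patient A", 0.72))
print(slide_annotation("Patient B", 0.18))
```

The design choice this illustrates is that the AI output is always present but rarely prominent, so clinicians never have to seek it out or dismiss it.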
Though the DST itself is still under development, the researchers tested this interaction design at three hospitals that perform VAD surgery, with DST-enhanced slides presented for simulated patients.
"The mid-levels — the support staff — loved this," Yang said, because it enhanced their input and helped them be more active in the discussion. Physician reaction was less enthusiastic, reflecting both skepticism about DSTs and the conviction that the interaction couldn't be properly evaluated without a fully functioning system and real patients.
But Yang said physicians didn't display the defensiveness or fear of being replaced by technology typically associated with DSTs. They also acknowledged that the DST might inform their decisions.
"Prior systems were all about telling you what to do," Zimmerman said. "We're not replacing human judgment. We're trying to give humans inhuman abilities."
"And to do that we need to maintain the human decision-making process," Steinfeld added.
The National Heart, Lung and Blood Institute and the CMU Center for Machine Learning and Health supported this research.