Will AI ‘Rehumanize’ Healthcare?
Insights from AAMI Standards Leaders
Business View Magazine reports industry insights from the Association for the Advancement of Medical Instrumentation (AAMI)
By Brian Stallard, with additional writing and reporting by Martha Vockley
“I got into healthcare to take care of people, but these days, I spend all of my time taking care of machines.” This is a sentiment Pat Baird, senior regulatory specialist at Philips, has often heard from hospital nurses and other clinicians from all over the U.S. With the advent of electronic medical records, big data, and an explosion in wireless hospital technologies, the number of medical devices clinicians are expected to interact with has only increased since these complaints began.
It’s a perspective that struck a chord for Baird, who serves as cochair for a new Artificial Intelligence (AI) Committee dedicated to standardizing how AI and machine learning (ML) are implemented and regulated for medical devices. The committee was put together as part of a joint effort between the Association for the Advancement of Medical Instrumentation (AAMI) and the British Standards Institution (BSI).
“Do you remember the song ‘Mr. Roboto’ by Styx, back in the early ’80s?” Baird recently asked a room of healthcare technology management professionals at the annual AAMI eXchange conference. As the song goes, ‘the problem’s plain to see, too much technology. Machines to save our lives, machines dehumanize.’
“I thought that was kind of awesome – very apt for the problem at hand,” said Baird, who has also had leadership roles on AI-related projects with the World Health Organization, the International Organization for Standardization, and the International Electrotechnical Commission.
So, what’s the solution to this problem? It may take more technology, not less, to finally overcome this tech-driven drought in patient bedside interactions. “I actually think that artificial intelligence and machine learning are going to help us re-humanize healthcare,” Baird’s fellow cochair, Jesse Ehrenfeld, said when the AI Committee was first launched.
An anesthesiologist, senior associate dean at the Medical College of Wisconsin, and President-elect for the American Medical Association, Ehrenfeld understands the importance of interacting with his patients. “AI is making it so caregivers have more time to interact with patients – to give care,” he explained. “To me, that’s the payoff of this technology.”
AI, the Partner
According to Erin Sparnon, senior engineering manager for device evaluation at ECRI and member of the AAMI Standards Board, there are two primary reasons why facilities are currently buying clinical healthcare AI.
“Case number one is to free up humans from doing routine tasks that don’t need a human. So, we pick on the imaging field: putting your radiology images up, turning them to the left, making sure that they’re just the right size—that sort of task,” she explained. “Use case number two is pulling findings and alerts and notifications out of huge pools of data that it wouldn’t be practical for a human to try to do.”
This frees up hospital staff time and other resources to be spent where they matter most: with the patients. But that’s not the only AI application making its way into the healthcare environment. The global medical device market has also seen an influx of machine learning algorithms designed to enhance the decision-making of clinicians. Some of the most common examples are devices that harness image recognition (driven by machine learning) to assist in the detection of illness. The program known as GI Genius, for instance, became the first AI-based device to be granted premarket authorization by the U.S. FDA for detecting signs of colon cancer.
There are also now programs that autonomously flag signs of diabetic retinopathy during scans of a patient’s retina. “And healthcare professionals who are not trained in ophthalmology at all carry this scan out,” added Sparnon. “That’s a big deal!”
In this way, the technology is not replacing the human element, but instead is enhancing the capabilities of humans so they better interact with their patients. “The way I see it, AI won’t replace physicians, but physicians who use AI will replace those who don’t,” Ehrenfeld said. “It’s increasingly likely that AI is going to be an essential tool that helps augment our practice, ensures higher-quality care, and brings additional value to the healthcare system.”
AI, the Ineffable
What complicates an otherwise bright future is the seemingly unknowable nature of an AI’s machine learning algorithm. “One of the interesting aspects of this technology is that it can continue to learn and improve itself over time,” Baird explained. “A system might be initially trained on a fixed set of images, but as it is used in multiple hospitals over a period of time, more training data can be collected and fed back into the learning system. Just as we gain experience over time, so can this software.”
This poses a unique challenge for regulators, who cannot reasonably predict how a machine learning algorithm might evolve in a unique healthcare environment. How do you ensure the safety and efficacy of a program that can change itself after premarket approval?
“FDA has made significant strides in developing policies that are appropriately tailored for software as a medical device (SaMD),” the U.S. FDA wrote in a 2019 discussion paper inviting industry comment. However, “the traditional paradigm of medical device regulation was not designed for adaptive AI/ML technologies… The highly iterative, autonomous, and adaptive nature of these tools requires a new, total product lifecycle regulatory approach.”
In 2021, with the launch of the FDA’s AI/ML-based SaMD Action Plan, the federal agency announced that it “would expect a commitment from manufacturers on transparency and real-world performance monitoring for [AI/ML]-based software as a medical device, as well as periodic updates to the FDA on what changes were implemented as part of the approved pre-specifications and the algorithm change protocol.”
Fortunately, Baird believes this won’t be as difficult as it sounds if the industry can first standardize how AI is designed for the healthcare environment. And this starts with risk management.
AI, the Familiar
In 2022, a small task force from the AAMI AI Committee published a new industry Consensus Report, AAMI CR34971:2022, which serves as a guideline for the application of ISO 14971 to artificial intelligence and machine learning. The document was reviewed by the full AI Committee and by risk analysis experts at the British Standards Institution before its launch.
While just an indistinguishable string of letters and numbers to people unfamiliar with medical device management, ISO 14971 is the go-to guidance for the risk management of medical devices around the world. “The process is the same no matter what you’re doing. You need to plan, set risk-acceptability criteria, and identify and evaluate different risks,” Baird explained. “The only difference is that ML is going to fail in slightly different ways than how software typically fails.”
Prior to publishing their report, the task force conducted a literature review of AI/ML failures in multiple other industries in an attempt to collect valuable lessons learned. “One of the things we noticed in the literature about ML systems is that many times failures occurred because, although the development team had data, they didn’t have knowledge,” Baird said. “Developers had logical assumptions regarding the use of their product, but the reality was different, leading to failure.”
Sparnon provided an example for healthcare technology professionals during the recent AAMI eXchange conference. “Imagine a system that looks at lung x-rays and is tasked with identifying the sickest patients.”
As the algorithm is fed more information, it learns that x-rays taken on the upper floors often come from the sickest patients, because those patients are too sick to go down to radiology. From that logic alone, completing the task boils down to efficiently “picking on where the image was taken,” said Sparnon. “It’s not actually assessing the patient’s lung at all. All the data scientist sees is ‘hey, look, this system is great at finding out who’s too sick.’”
There’s no one with the right contextual knowledge to realize the program has failed to grasp the true purpose of the task. “To be successful, we really need to understand the context of use, and leverage the wisdom around us. We felt it was important to stress this point when discussing risk management,” Baird concluded.
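This kind of shortcut learning can be sketched in a few lines of code. The toy example below (all names and numbers are invented for illustration; nothing here comes from Sparnon’s actual system) shows a “model” that keys entirely on where an image was taken rather than on the lung itself. It looks highly accurate while the floor–sickness correlation holds, then collapses toward chance the moment that correlation breaks:

```python
import random

random.seed(0)

# Toy illustration of the x-ray shortcut: in the training data,
# "imaged on an upper floor" correlates perfectly with "sickest
# patient," so a model can score well without looking at the lung.
def make_records(n, shortcut_holds):
    records = []
    for _ in range(n):
        sick = random.random() < 0.5
        if shortcut_holds:
            upper_floor = sick  # portable scanner only visits the sickest
        else:
            upper_floor = random.random() < 0.5  # correlation broken
        records.append({"upper_floor": upper_floor, "sick": sick})
    return records

def shortcut_model(record):
    # "Model" that ignores the image content and keys on location alone.
    return record["upper_floor"]

def accuracy(records):
    return sum(shortcut_model(r) == r["sick"] for r in records) / len(records)

train = make_records(1000, shortcut_holds=True)
deploy = make_records(1000, shortcut_holds=False)

print(f"accuracy while the shortcut holds: {accuracy(train):.2f}")   # looks great
print(f"accuracy once it breaks:           {accuracy(deploy):.2f}")  # near chance
```

The point of the sketch is Baird’s: nothing in the data alone reveals the failure. Only someone with contextual knowledge of the hospital would know that floor number is a proxy, not a diagnosis.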
As a result, the CR details safety-related characteristics and considerations for data management, bias, security and privacy, adaptive systems, and even the problems that can arise from placing too much trust in AI.
The CR also includes annexes covering the risk management process, risk management examples, considerations for autonomous systems, and personnel qualifications. AAMI and BSI next plan to use this CR as the basis for an AAMI technical report and a British Standard. Longer term, AAMI and BSI expect to propose these resources to the International Organization for Standardization as guidance, informative, or annex documents to ISO 14971 and its sister document, ISO 24971.
“I’ll be the first to admit that AI has been promising us great things for decades and has consistently failed to meet people’s expectations,” said Baird, “but I think this time we will succeed.”
Parties interested in joining the AAMI AI Committee can email Standards@aami.org for more information.
AT A GLANCE
Association for the Advancement of Medical Instrumentation (AAMI)
What: A non-profit organization dedicated to advancing the development and the safe and effective use of medical technology
Where: Based in Arlington, Virginia