The Food and Drug Administration on Thursday released a new plan allowing developers of medical devices that rely on artificial intelligence to automatically update products already being used in the clinic.

The agency’s draft guidance outlines a new process in which the makers of AI tools could get approval for modifications in advance by submitting a document describing how the changes will be implemented and tested.

“If what you’re going to do looks right, we can bless the plan,” Jeffrey Shuren, head of the FDA’s Center for Devices and Radiological Health, told STAT. “Then you can move forward, make those changes as long as you follow the plan and you don’t have to come back to the FDA.”

The document, known as a “pre-determined change control plan,” would eliminate the current need to seek the agency’s approval for every significant product update. The FDA described its approach as “the least burdensome” way to safely allow companies to modify products that use machine learning, a type of AI in which a computer learns to perform tasks without explicit programming.

“One of the greatest potential benefits of [machine learning] resides in the ability to improve ML model performance through iterative modifications, including by learning from real-world data,” the agency wrote in its draft guidance.

The draft is nonbinding and represents the agency’s first attempt to deal with an especially thorny dilemma presented by modern AI systems — that their core functioning can change as they adapt to circumstances in the real world.

While that might be fine — and desirable — for AI products designed to recommend movies to watch or shoes to buy, it becomes problematic in medicine, where errors introduced during product updates can cause deadly malfunctions. AI is now used in a wide array of medical products designed to detect and monitor life-threatening conditions, and deliver the right treatments to seriously ill patients. It is used to quickly detect strokes, flag lesions suspicious for cancer, and warn of conditions such as sepsis, a life-threatening complication of infection.

As these AI products are used in clinics, they are not the only things likely to change. The thinking and behavior of clinicians may also evolve through the use of these tools, altering relationships and decision-making in ways that can be difficult to track.

“I don’t think anybody has any experience implementing a regulatory or accreditation regime around that sort of dynamic system,” said Philip Payne, director of the Institute for Informatics at Washington University in St. Louis. “There is just no benchmark.”

He said the FDA’s change-control framework brings the right engineering process to bear on clinical AI tools, but he also worries about whether the documentation burden will delay the modification process. “I love the change control design pattern,” he said. “I worry about the speed and agility of its implementation in a regulatory construct.”

Bradley Merrill Thompson, a lawyer who has been critical of the FDA’s recent guidance surrounding software devices, wrote in emailed comments that the new guidance will “encourage innovation and the timely delivery of new medical technologies.”

He said it will significantly increase the burden on medical AI developers to plan ahead, but he noted many are already doing so because of recent federal legislation that gave the FDA authority to require submission of change-control documents. “Going forward, for developers of machine learning driven medical devices, it will be crazy not to include a (change-control) section from here on,” Thompson wrote. “All machine learning driven devices will need to evolve over time, so it would be wasteful and irresponsible to not try to anticipate what those changes might be and plan for them accordingly in an initial FDA submission.”

The FDA is seeking public comments on its guidance as it looks to formally change its approach to reviewing AI devices, a process that could take years. The draft breaks down the process for seeking advance approval of product modifications in explicit detail, laying out the steps an AI developer must take to modify products after their introduction into medical settings.

The makers of the products would have to clearly explain their changes, specify plans for testing the performance of the altered product, and describe how they will communicate modifications to users. Once finalized, the policy would allow developers to get permission ahead of time for modifications when they seek the agency’s initial approval for their products.

Lizzy Lawrence provided additional reporting.

This story is part of a series examining the use of artificial intelligence in health care and practices for exchanging and analyzing patient data. It is supported with funding from the Gordon and Betty Moore Foundation.
