With the rise of artificial intelligence (AI) and machine learning (ML) integration across industries, it's no surprise that their application could lead to incredible medical innovation. Many companies have expressed frustration with the FDA's slow movement when it comes to AI/ML in the medical field. They believe the agency's inefficiency is slowing down healthcare innovation; however, it would be irresponsible to deploy AI without authoritative oversight, especially when it directly affects patient care and privacy.
In 2019, the U.S. Food and Drug Administration (FDA) published its first draft of a regulatory framework for AI/ML-based software as a medical device (SaMD). Given that this is a new environment for the FDA to navigate, the agency has requested feedback and continues to refine its approach to suit manufacturers' needs and approve AI applications efficiently while also prioritizing safety.
Part of this approach is a hybrid model of centralized and decentralized regulation. Under decentralized regulation, AI applications that put little of a patient's physical health at risk (administrative paperwork, for example) would face less oversight and would not require a pre-market evaluation. Under centralized regulation, software with the potential to negatively impact patients' health would go through a lengthy approval process in which its algorithm is put under a microscope. This presents another challenge: algorithms are known to change and improve with use, and regulating them becomes more difficult as they expand from a local health setting to a national scale.
The FDA proposed a total product lifecycle (TPLC) approach that allows regulatory oversight to embrace the improvement power of AI/ML SaMD while assuring that patient safety is maintained. It also ensures that ongoing algorithm changes are implemented according to pre-specified performance objectives, follow defined algorithm change protocols, use a validation process committed to improving the performance, safety, and effectiveness of AI/ML software, and include real-world performance monitoring.
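To make the idea of pre-specified performance objectives concrete, here is a minimal Python sketch of how a manufacturer might gate an algorithm update: the update is deployed only if its validation metrics meet thresholds agreed on in advance, and is escalated for review otherwise. The metric names, thresholds, and function names are illustrative assumptions, not drawn from any FDA guidance.

```python
# Illustrative sketch of a predetermined change control check.
# The objectives below stand in for thresholds a manufacturer
# would pre-specify in its change protocol; all values are made up.

PRESPECIFIED_OBJECTIVES = {
    "sensitivity": 0.95,   # minimum acceptable sensitivity
    "specificity": 0.90,   # minimum acceptable specificity
}

def meets_objectives(metrics: dict) -> bool:
    """Return True only if every pre-specified objective is met."""
    return all(
        metrics.get(name, 0.0) >= threshold
        for name, threshold in PRESPECIFIED_OBJECTIVES.items()
    )

def validate_update(validation_metrics: dict) -> str:
    """Gate an algorithm change on the pre-specified objectives."""
    if meets_objectives(validation_metrics):
        return "deploy"      # change falls within the agreed protocol
    return "escalate"        # out-of-protocol change needs review

# Example: an update that improves sensitivity but drops specificity
# below its pre-specified floor is escalated rather than deployed.
print(validate_update({"sensitivity": 0.97, "specificity": 0.88}))  # escalate
```

The same kind of threshold check could run continuously against real-world performance data, which is the monitoring half of the TPLC picture.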
FDA’s TPLC AI/ML workflow
This summary is a very general overview of the FDA's framework, which becomes more detailed for specific applications, e.g., detecting feeding-tube misplacement on X-ray images. In response to feedback on this discussion paper, the FDA released its 2021 action plan highlighting five main actions:
- Further developing the proposed regulatory framework, including through issuance of draft guidance on a predetermined change control plan (for software’s learning over time)
- Supporting the development of good machine learning practices (GMLP)
- Fostering a patient-centered approach, including device transparency to users
- Developing methods to evaluate and improve ML algorithms
- Advancing real-world performance monitoring pilots
Two years later, in April 2023, the FDA released its latest guidance based on feedback from its previous plan, public meetings, workshops, and pre-submissions for devices. Like the previous version, it is subject to change and serves only as guidance describing the FDA's current thinking. It does not establish legally enforceable responsibilities unless specific regulatory or statutory requirements are cited.
While it is frustrating that the adoption of AI/ML lags behind the pace of the technology itself, we have to consistently put human safety first. At the very least, we can appreciate the FDA's dedication to listening, incorporating feedback, and taking public discussion and general comments seriously. We're only entering the first phases of AI/ML SaMD regulation with the FDA; in a decade, things will look very different as the agency receives more submissions, feedback, and testing.
For more information on how AI Dynamics’ research and previous work supports medical innovation, contact us today.