AI in Healthcare – Regulatory Perspectives
As AI technology advances, regulators may consider multiple approaches to address the safety and impact of AI in the healthcare industry. Key questions include how international standards and other best practices are currently used to support medical software regulation, and what differences and gaps must be addressed for AI solutions. AI systems need to generate real-world clinical evidence throughout their life cycle, and adaptive systems in particular may require additional clinical evidence to support their ongoing modification.
Over the past ten years, regulatory guidance and international standards have emerged for medical software, whether it is placed on the market as standalone software (software as a medical device) or embedded in a physical device. These provide requirements and guidelines that software manufacturers follow to demonstrate compliance with medical device regulations and to place their products on the market.
However, AI in healthcare introduces new risks that are not addressed by the current portfolio of standards and software guidance. Additional approaches are required to ensure the safety and performance of AI solutions placed on the market. As these new policies are being defined, the existing regulatory landscape for software is a good starting point.
In Europe, several general requirements apply to software under the Medical Device Regulation (MDR) and the In Vitro Diagnostic Regulation (IVDR). These include: general responsibilities of manufacturers, such as risk management, clinical performance evaluation, quality management, technical documentation, unique device identification, post-market surveillance, and corrective measures; requirements on device design, interaction with the environment, and diagnostic and measuring functions, as well as design and manufacturing requirements covering active devices and devices connected to other equipment; and the information supplied with the device, such as labeling and instructions for use.
In addition, EU regulations set out requirements specific to software. These include requirements for devices incorporating electronic programmable systems and for preventing negative interactions between the software and the IT environment in which it operates.
In the U.S., the FDA recently published a discussion paper proposing a regulatory framework for modifications to AI/machine learning-based SaMD. It builds on the practices of current FDA premarket programs, including the 510(k), De Novo, and premarket approval (PMA) pathways. It draws on the FDA's benefit-risk framework, the risk management principles in the software modifications guidance, and the total product life cycle (TPLC) approach of the FDA Digital Health Software Precertification (Pre-Cert) Program, in addition to the risk categorization principles from the IMDRF.
Elsewhere, other countries have begun developing and publishing regulatory guidelines. In China, the National Medical Products Administration (NMPA) has developed a guideline for aided decision-making medical device software that uses deep learning techniques. Japanese and South Korean regulatory bodies have also published guidelines for AI in healthcare.