HealthManagement, Volume 23 - Issue 2, 2023

Stephen Gilbert, Professor for Medical Device Regulatory Science at the Else Kröner Fresenius Center for Digital Health at Dresden University of Technology, spoke to HealthManagement about the need for improved and balanced regulation of medical devices and artificial intelligence, and what can be expected in the future.

 

What is your stance on the need for better regulation of AI and machine learning medical tools?

 

It is, as always, a question of balance between old habits and new approaches.

 

A lot of the regulation that exists at the moment was not designed with AI or even modern software development practices in mind; in our case, it was developed primarily around implant technologies. Given the amount of time dedicated to designing a new hip implant, and the critical importance of the materials and the mechanical functioning of an implant, there had to be very tight control of every component that goes into it and of every revision of that device.

 

There’s a question of finding that balance with new technologies which are infinitely more adaptive by their nature.

 

You could take an extreme perspective, and some very traditionally minded people within the regulatory space do. They may say there needs to be very, very tight control of versioning, and consequently the update cycle should be incredibly slow.

 

We’re talking years, as exemplified by a hip implant, because patients are involved, oversight takes time, and there is a need for extreme caution.

 

However, in my mind there needs to be balanced caution. The way you should approach software is actually to allow feedback loops and to listen to and react to data about how your product is used and how it is performing; to record and measure that performance as close to real time as possible, and then to allow the adaptation of that software as close to real time as possible, because software can be adapted very quickly.

 

If you’re operating in a system where you have no feedback on the performance, on what basis are you modifying software? How do you know that it is functioning correctly? The extension of that from software to AI is the same argument.

It is still a question of balance. It is not a question of saying we need to monitor exactly as we do with a hip implant or a hip replacement joint, nor is it about finding a middle ground exactly. Instead, it is about finding new ground: new approaches, new methods and new ways of monitoring.

 

Underregulation leads to “insufficient oversight, irresponsible manufacturers paying insufficient attention to safety, and patient harms” (Gilbert et al. 2023). However, overly careful regulation can limit the availability of essential or lifesaving materials or products. How can this imbalance be avoided?

 

The feedback loop is about a device which is on the market - it could be an AI device or a software device - and how you gather feedback from the doctors and the patients on how it is performing. It all comes down to the design of the device.

 

It is similar with the regulatory system: there are feedback loops, but they don’t function very well, and they function very slowly. There is a need for them to be more reactive and less antagonistic in many ways.

 

Maybe it is the case that the regulator always feels attacked and responds to that by asserting its authority.

 

There needs to be a better listening approach from regulators. However, at the moment, under frameworks for assessing regulation, regulators and legislators think that they only need to produce a report showing that the regulation is working well.

 

In the U.S., several of the leading universities are linked with the FDA within a program. The likes of Stanford, Harvard and others are carrying out data-led research that really is starting to make a difference in the U.S. They are responsible for feeding data in, for studying it, and for actually identifying new problems and new technologies. They are exploring the innovations that are about to come through the pipeline.

 

There are innovations that scientists are delivering that the regulatory system can’t yet cope with. Consider new types of cells in personalised cell therapies, or new types of AI like large language models, all of which could deliver potential benefit to patients.

 

How should that be regulated? It is not a question of how it should be stopped or how it should be enabled, but of how it should be addressed in a holistic manner, considering patient safety and the potential for patient benefit. So it is a question of how to pay closer attention to this. Now, this is harder within Europe because Europe has many member states and a very complicated governing structure, whereas a country such as the UK, Switzerland or Japan that is not subject to union oversight has the advantage of being a single system.

 

But it does not mean oversight cannot be done in a European setting. It is a question of having the will to do it and setting up systems to do it.

 

Medical malpractice and medical harm are said to be huge problems. Will the fear and risk of malpractice be addressed if AI ‘helps’ in medical decision making?

 

The overall question for health care systems is building an oversight approach for AI and other health software. It is not only a question of AI; it is about ensuring the systems are not blocking progress or blocking advances towards rational and sensible approaches to oversight.

 

There are a few really interesting proposals on the U.S. side of regulation, not yet fully in force, for introducing reactive oversight mechanisms within larger hospital groups - not for individual hospitals - and for the larger payer systems, where there is an impact assessment before new technology is introduced. It is a fundamental responsibility to assess AI when it is introduced, and even its interaction with other areas.

 

Regarding medical malpractice, there is very large potential for decision support algorithms to reduce malpractice by enhancing the decision-making capabilities of clinicians and healthcare institutions.

 

Unfortunately, malpractice can go in a number of different directions. In certain types of health care systems, concerns about malpractice liability increase interventions and can be an incentive to over-treat. There can be a financial incentive through profit, but there can also be accidental incentivization through bad design of the pricing system. Medical malpractice can occur if a doctor makes an error and delivers substandard treatment as a result of fatigue and burnout. AI does not get fatigued, costs a lot less, and may be very valuable in augmenting clinician decision making, if correctly introduced.

 

Malpractice fear, as always, is the thought of ‘have I done enough?’ Support algorithms have the potential, if developed and implemented correctly, to provide a degree of backup to doctors. Within a European setting, all of these would be classified as medical devices. If they’re providing decision support, they’re much more tightly regulated.

 

However, if AI is correctly applied in this area, there is certainly potential to consider patient needs, health care system needs and doctor needs.

 

AI should not be considered as a single tool, but holistically across the range of tools that are introduced to enhance the workflows within hospitals, in order for there to be real benefits in this area.

 

Can you explain how the FDA’s approach to the regulation of AI-based DHTs differs from that of the European Union?

 

There are two critical aspects which are different: the first is oversight of medical devices which are introduced into the U.S., or which may not even arrive depending on political progress; the second is the question of monitoring change. The latter comes under the relatively technical title of change control plans, or predetermined change control plans. The UK is approaching this as a framework for general medical software, not only AI-based software.

 

We’re talking about adaptability in a kind of batch sense, where an AI company or a medical device developer can actually update on the basis of received and updated data. Their tool can then provide better care and enhanced decision support for patients.

 

Europe does not have this adaptive system for AI-enabled medical devices. It has some very early discussions of such a system, but these are still some way off.

 

The U.S. will allow a lot of non-critical AI, in the area of broad decision support, to sit outside medical device regulation. These tools use AI to interpret data and to make diagnostic and treatment recommendations to doctors based on patients’ real-time records, to improve EHRs, and to help in decision support, including in non-emergency situations. This creates a situation where the doctor can make a critical decision better and more safely; AI can enable that, provided certain safeguards are built in.

 

In order for it to be counted as a ‘non-device’ and not be under the close scrutiny of the FDA, manufacturers need to have an approach where their AI is explainable. It must be explainable for the doctor and explained to the doctor. This is the single most transformational difference between current European and U.S. approaches to the regulation of AI in medicine and digital health in general.

 

The basis on which the AI makes its recommendations has to be made clear to the doctor, and so does the evidence behind those recommendations.

 

Additionally, the interface needs to be designed so that it does not lead the doctor to stop thinking.

 

There is an enormous amount of freedom gained from having support systems that are free to adapt without change-by-change regulatory oversight. We are already seeing the impact of that in the ecosystem of decision support built around EHR systems in the U.S. There are many plug-ins, tools and support services for doctors which are built holistically by many providers, and sometimes by the big electronic health system providers themselves.

 

I believe the right approach will enable a wave of innovation, because there is an absolutely clear need within our health care systems, and patients want these types of tools.

 

Who do you think should be the final arbiter in ensuring a product is safe, unbiased and sufficiently powerful that it doesn’t cause harm?

 

I do believe the U.S. program has the balance approximately right.

 

The U.S. overall approach is to split non-device and device tools and to ensure the companies have a responsibility to stay within the scope of what is device and what is non-device.

 

The FDA have a responsibility to police those boundaries. Where a company states it is doing something in a non-device category and it is not, it is the responsibility of market surveillance to oversee this - and in this case, that is a responsibility of the FDA.

 

The low or intermediate level of support that these decision support tools provide to doctors still requires the doctor to take responsibility for ensuring the information and evidence provided is reliable. It is not a question of simply making a decision. There is a critical responsibility for doctors, and that is understood within the design of the U.S. program.

 

There is a responsibility for the health care system to ensure that what it is introducing works with its staff: to make sure that staff are trained to use it, and to make sure that it is overseeing, from a holistic view, what systems are in place in its hospitals. There is a draft U.S. Act of Congress, the Algorithmic Accountability Act, which would formally bring in responsibilities for hospital systems and for medical device developers if it were passed into law.

 

The responsibilities in the U.S. system are very clear regarding the FDA needing to approve products. The manufacturers have a responsibility to produce evidence for the safety and performance of these tools. In Europe, we have a situation where almost everything is under tighter control than under the programme the U.S. will have.

 

Will AI help or hinder staff, particularly as they are already overtasked (burnout)?

 

AI has the potential to make everything worse but it also has the potential to make everything better, and we will probably see a balance.

 

There are very few people who are calling for no regulation whatsoever within the health care space. You may see a very small number of entrepreneurs who have a very short-term, selfish perspective - they are simply eager to bring their products onto the market. However, there are not many within the public or political scene who would say the health care system should not be a regulated space.

 

At the moment, medical device regulation has a focus on the individual device. However, that does not work particularly well when you’re considering digital systems and interacting digital systems, and when you’re considering a transformation of the workflow within the health care system for the patient, the doctor and the interaction between doctor and patient.

 

That needs a much wider consideration of how the overall systems are working within hospitals. At the moment, there is so much stress on hospital systems and hospital managers in most countries. In most of Central and Western Europe, where we are monitoring, trying to improve, or coping with provision, it has proven very challenging because staff are always in a situation of extreme stress.

 

In my view, the introduction of AI technologies needs to be considered from a whole system approach rather than from an individual technology perspective.

 

I’m personally relatively optimistic, because I believe countries will increasingly start to realise that transformative AI and software technologies are in the pipeline. As more are shown to be safe and are further developed, I believe there will be a realisation that the processes to oversee the good introduction of these technologies need to be taken seriously.

 

How do you envision the post-Brexit regulation will be different to that of the U.S. and EU?

 

The UK does plan to introduce approaches like those of the U.S. in terms of algorithm change control. They have an ambitious program, which may come later, that will be extended to general health software, not only AI-based software. That will be transformative in the UK.

 

The UK is in an interesting position. They cannot act like Japan or the U.S., so I see them having some innovation, but then always being held to the position that they effectively have to keep their regulations very similar to Europe’s.

 

The UK is not a huge manufacturer of medical devices and software medical devices compared to other countries, but its export market to Europe is very important. Therefore, it has to stay close to EU regulations.

 

However, a much larger challenge in the U.S. is determining whether clinical decision support is considered a device or non-device.

 

The FDA have traditionally been very fast to respond, whereas the European regulators are, in the experience of many within the industry, very slow to respond to questions, particularly in the period of the introduction of the MDR. There is sometimes a reluctance to answer questions quickly, and sometimes notified bodies are so busy, or too few in number, to approve individual devices promptly.

 

Additionally, with the new regulations there is a restriction on the ability of those notified bodies to actually give any feedback, and that has been deliberately introduced.

 

I believe the most important aspect for the digital area is this holistic consideration of the interaction of many tools, devices and approaches within our modern and developing health care system. This is addressed in the most interesting way within the draft U.S. legislation, the Algorithmic Accountability Act, where oversight is provided not only by the medical device regulator but also by the hospital system and the hospital regulators - by all of them together.

 

How much do you think the GDPR actually conflicts with the needs of medical technology development and its regulation?

 

There is a transformative act, the European Health Data Space proposal, being discussed at the European level, which would actually bring in approaches to allow patients to have access to their electronic health record. They will be able to access their record and download it, which means they’ll be able to run their own algorithms on it. They may soon be able to pay for services, provided at low cost, which will assess how their health care has been delivered. They will also be able to transfer their electronic health record between European countries.

 

Under the European Health Data Space, it is likely that patients will have an opt-out regarding the use of their data for wider research purposes after anonymisation.

 

In Germany, there is likely to be a specific opt-out approach where patients can opt out of different types of use. The evidence is that sufficient patients would remain opted in, which would allow data to be collected for purposes such as post-market surveillance of medical devices or surveillance of the efficiency of health care delivery, whilst simultaneously protecting patients’ rights through the ability to opt out.

 

My strong belief is that this can be done with the public and with patients by explaining to them that they can opt in to sharing their data for the public good, particularly as another aspect of changing health care systems is that health care is increasingly moving into the patient’s home.

 

Conflict of Interest

Stephen Gilbert is an advisor/consultant for Ada Health GmbH and holds share options; he has consulted for Una Health GmbH, Lindus Health Ltd, FLO Ltd and Thymia Ltd.

 


References:

Gilbert S et al. (2023) Learning From Experience and Finding the Right Balance in the Governance of Artificial Intelligence and Digital Health Technologies. J Med Internet Res. 25:e43682