HealthManagement, Volume 23 - Issue 6, 2023


Oliver Kimberger, Professor at the Medical University of Vienna, shared his insights with HealthManagement.org on the evolving relationship between Artificial Intelligence (AI) and established medical culture, in the context of his session at the 2023 Patient Safety Conference, titled “Culture in organization and AI: Is it compatible with current medical culture?”


Is there potential for AI to enhance and safeguard the safety and quality of healthcare delivery?

Yes, I’m quite certain there is. However, we currently face the challenge that AI has only a limited presence in the ICU and the OR. A few approved, commercially available applications offer decision support in the ICU, but the options are still limited. In the OR, for example, there is one available application that uses an AI algorithm to warn you in advance that a patient will become hypotensive. Although availability is currently limited, the potential is exceptionally promising, and I’m confident that we will see a rise in the development and deployment of such applications in the near future. Initially, they will primarily serve as helpers for busywork and as quality assurance tools; later, they will play a role in predicting disease trajectories and optimising therapy.


How willing are healthcare professionals to embrace AI technologies in their daily work, and what is their level of trust in AI?

The current level of trust in AI is moderate to low, primarily because people tend to view AI as a black-box algorithm whose inner workings they cannot see. Moreover, healthcare professionals are not used to working with such systems and received no education about AI and how it works during their medical studies. Consequently, trust-related concerns arise, including fears that the data collected during daily treatment may compromise privacy, both for patients and physicians.


Some individuals also worry that failing to follow AI recommendations might lead to legal problems, and it is unclear whether an AI can be held legally responsible for its own decisions. A possible future role for AI is that of an informed advisor, much like a senior colleague whom you do not blindly trust either: often right, but, like any experienced colleague, capable of making mistakes.


How do patients and the general public feel about using AI in healthcare?

I’m not entirely certain that people are fully aware of this issue. From the patient’s perspective, there is a fear that increasing AI involvement might result in further alienation from doctors: patients worry that their first point of contact may no longer be a doctor but, for instance, an AI performing initial triage. If you were to ask people on the street about their fears regarding AI, some might express concerns about the growing distance between physicians and patients.


Physicians, on the other hand, are afraid of de-skilling. They worry that people might lose essential skills, just as we’ve become reliant on GPS in cars and can barely navigate without it. Similarly, the day could come when physicians can no longer interpret medical data themselves because AI has always done it for them; should AI ever be unavailable, the skills are gone. This is why I believe AI should be regarded not as a substitute for physicians but as a supplement to medical practice.


What strategies can be employed to ensure the safe and effective integration of AI in medical practice and to ensure the culture in healthcare organisations is compatible with AI technology?

I believe that education is paramount. Doctors and healthcare professionals need to know what AI is, what it can and cannot do, and how it functions. They don’t need to grasp the actual programming or algorithm formulas, but they should understand the basic concepts. Furthermore, it’s crucial for them to recognise that AI is not something to trust blindly, but neither is it something to distrust on principle.


Developing a critical approach to AI is key, and this should be integrated into their education. This educational shift isn’t limited to physicians alone; it should extend to all healthcare professionals, including nurses, physiotherapists, and others. It’s a cultural change that involves everyone. It’s also not sufficient for AI to learn from a limited dataset, because that would make it applicable only to a specific population. Instead, AI should be trained on diverse datasets so that its algorithms generalise across populations; this helps prevent underprivileged or underrepresented groups from being disproportionately affected by AI.


How are issues related to patient privacy and bias in AI algorithms being addressed?

They are not being addressed enough. The limited availability of open databases is a real problem: many algorithms learn from these databases, and because the databases are limited, they inherently carry societal biases and may not represent the diversity we need.


We must be aware of this problem. It can be mitigated by seeking out larger and more diverse databases, but it is important to note that if a database already contains bias, no amount of computation can completely eliminate it; the only real remedy is to work with even larger and more diverse databases. This doesn’t mean we should avoid using these databases altogether, but we must be aware of the limitation and address these biases in publications and in the development of algorithms.


Are medical schools and healthcare institutions providing adequate education and training in AI for healthcare professionals? Is this something that we will see progress?

The Medical University of Vienna is taking steps to incorporate the digitalisation of medicine and AI into its curriculum, reflecting a growing recognition of the importance of these topics in modern healthcare. We are also working on a master’s programme in digital medicine (https://digital-skills-jobs.europa.eu/en/ds4health). Starting such a programme demonstrates a commitment to staying at the forefront of medical education, and it’s an exciting option for students, who can pursue it after their medical studies.


What changes in medical culture are expected as a result of the integration of AI into healthcare?

The integration of AI in healthcare brings improvements in various aspects, including quality, adherence to guidelines, and administrative efficiency. However, we have to take care that implementing these methods does not lead to too much de-skilling.


One notable benefit of AI in healthcare is the reduction of administrative burden. Tasks such as coding patient records and generating patient summaries, which take up a lot of doctors’ time, can be automated with AI. This not only streamlines processes but also frees healthcare professionals from time-consuming paperwork, allowing doctors to concentrate on actually delivering patient care rather than being glued to administrative work.


How will the roles and interactions of healthcare professionals change and evolve with the integration of AI in healthcare?

There are two possibilities. The less favourable one is that, given the global shortage of healthcare professionals, AI and digital medicine will merely compensate for the missing workforce; in that scenario, the improvement in patient care may not materialise at all. The more favourable one is that AI improves the quality of care, adherence to guidelines, and efficiency, freeing healthcare professionals from mundane work like coding and administrative tasks. They will then have more time to spend with patients, and AI will be not merely workforce compensation but a valuable tool for improving patient outcomes and experiences.


Conflict of Interest

None.