HealthManagement, Volume 24 - Issue 1, 2024


A conversation with Harvey Castro, author of "ChatGPT and Healthcare: Unlocking The Potential Of Patient Empowerment," about new technologies, what they mean for healthcare in 2024 and what leaders need to know before they can fully embrace innovation.

 

Key Points

  • By leveraging the power of generative AI, hospital systems can improve patient scheduling and optimise appointment workflows.
  • Physicians can turn to AI tools to help them with their differential diagnosis or as a feedback loop to ensure they haven't missed anything.
  • Many different platforms are now available with different flavours of generative AI technology, but ChatGPT has cemented its place as the leader of the pack.
  • The synergy between human expertise and AI promises exceptional results, setting a new standard in healthcare provision.

 

In the rapidly evolving landscape of healthcare, digitalisation continues to revolutionise how patients receive care, professionals deliver services, and organisations manage operations. The momentum of digital transformation in healthcare shows no signs of slowing down. From ChatGPT to virtual reality, the trends shaping this new technological era are diverse and impactful. In our conversation with Harvey Castro, we went over what these new technologies mean for healthcare in 2024 and what leaders need to know before they can fully embrace innovation.
 

What was your first impression of ChatGPT technology, and how does it relate to healthcare today?

 

I started playing with ChatGPT when the tool first came out in November 2022. My initial amazement quickly evolved into a need to share my experience of this new technology with the world. I published my book "ChatGPT and Healthcare: Unlocking The Potential Of Patient Empowerment" in February 2023.
 

ChatGPT is like a person trying to be the smartest and most helpful they can be. You can talk to them anytime, you can ask any question, and this AI will always reply with an answer. ChatGPT is essentially a database consolidated from everything the internet has to offer, and the AI has gone through all this data. However, the problem with this technology is that it can make mistakes. Much like a very intelligent but very proud friend, ChatGPT will always come back with an answer to your question, and that answer might be correct or wrong; the tool doesn't have the capability to say it doesn't know. If human users are unaware that some answers can be mistakes, they will fall into traps and rabbit holes and start taking incorrect data at face value. In my eyes, a human expert is always needed in the equation to weed through ChatGPT's answers and to dissect and filter the informational output.
 

As for applications to healthcare, the low-hanging fruit is simply using ChatGPT for virtual assistance or telemedicine. By leveraging the power of generative AI, hospital systems can improve patient scheduling and optimise appointment workflows. Another use I have witnessed is clinical support. Even if healthcare administrators might not have approved this specific clinical use, physicians can turn to AI tools to help them with their differential diagnosis or as a feedback loop to ensure they haven't missed anything. ChatGPT offers supplemental help to some physicians after a long shift, when fatigue sets in.
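To make the scheduling use case concrete, here is a minimal sketch of what a virtual scheduling assistant built on a generative AI chat API could look like, using the openai Python client. The model name, prompt, and workflow are illustrative assumptions rather than any hospital's actual deployment; a production system would add identity verification, EHR integration, and human oversight.

    # Minimal sketch of a generative-AI scheduling assistant (illustrative only).
    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    SYSTEM_PROMPT = (
        "You are a hospital scheduling assistant. Collect the patient's "
        "preferred dates, the reason for the visit, and the referring "
        "department, then propose up to three appointment slots. Never give "
        "medical advice; direct clinical questions to a clinician."
    )

    def suggest_slots(patient_message: str) -> str:
        """Ask the model to draft appointment options from a free-text request."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model name
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": patient_message},
            ],
        )
        return response.choices[0].message.content

    print(suggest_slots("I need a follow-up for my knee MRI, ideally next week."))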
 

What sets ChatGPT apart from other forms of AI that already exist in healthcare?

 

ChatGPT, or generative AI technology in general, didn't come out of the blue; the underlying technology was actually invented by Google years ago. OpenAI's originality lies in the fact that its partnership with Microsoft enabled it to scale up its tool very quickly and be the first to bring it to the next level. OpenAI was the first to put this technology in the hands of the general public, and the tool's popularity soared. Even though the technology existed before, nobody from the public had been able to experiment with it first-hand. This explains the sudden viral popularity and subsequent adoption, and it is what sets ChatGPT's strategy apart from its competitors'. Many different platforms are now available, offering different flavours of generative AI technology, but in my eyes, ChatGPT has cemented its place as the leader of the pack, as it brings so many different capabilities together on one single website.
 

Do users need specific expertise to use ChatGPT? Are there any pitfalls they should be aware of?

 

Generative AI is a good tool, but a strong educational background is needed to use this technology in the best way. If users do not know how the technology works or are not equipped to recognise the best practices, they'll rush into it without even realising they're doing it wrong.
 

First comes the issue of bias. While AI algorithms do not experience bias per se, the information the AI was trained on may carry bias from the human opinions that went into establishing the knowledge database. The geography and time range of that knowledge can also introduce bias and warrant the need for human expertise when using ChatGPT. For example, if the data used to build ChatGPT's capabilities was focused on Europe and North America, the relevance of the AI's answers would change depending on the user's location. Whereas a European physician would recognise their situation in the answers provided, an African physician would be confronted with answers that are right for the wrong population in terms of culture, available resources, and healthcare access; for them, the tool would ultimately be useless. The passing of time can also be a source of bias: knowledge evolves, but stored data doesn't. When asking ChatGPT how to perform a specific medical procedure, answers could be dated and not reflect the current or latest recommendations. Answers that were right at some point might no longer be, because ChatGPT might not have had access to the most recent data, leading to partially incorrect information with potential adverse consequences. Essentially, ChatGPT can only be as good, and its answers as relevant, as the information the tool can access.
 

Generative AI should be used carefully, and users must be made aware of these technological limitations. Patients should not replace their physicians with ChatGPT queries, as medical expertise is needed as a guardrail for information quality.
 

Do you have examples of hospitals or health systems using generative AI?

 

In New York, a network of hospitals utilised predictive analytics through their AI system. What's notable is that they trained their AI with data from their specific population and hospital visitors, tailoring their tools specifically to their patients. Each hospital caters to a unique demographic, influencing their perspectives and actions, leading to varied outcomes. They aggregated patient data into their AI, resulting in a tool for doctors to access predictive analytics. This tool analyses past patient data to categorise current patients and predict their likelihood of readmission. This empowers providers to reconsider discharge decisions, potentially preventing unnecessary returns. Conversely, it can also support confident discharge decisions for lower-risk patients. This integration of AI into healthcare delivery is ground-breaking, enhancing doctors' capabilities rather than replacing them. The synergy between human expertise and AI promises exceptional results, setting a new standard in healthcare provision.
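As an illustration of the kind of predictive analytics described here, the sketch below trains a simple readmission-risk model on synthetic patient data with scikit-learn. The features, data, and model choice are assumptions for demonstration and do not represent the New York network's actual system; the point is that risk scores derived from a hospital's own historical data can inform, rather than replace, discharge decisions.

    # Illustrative readmission-risk model trained on synthetic "historical" data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n = 2000

    # Hypothetical features: age, prior admissions in the last year, length of stay.
    X = np.column_stack([
        rng.normal(65, 15, n),
        rng.poisson(1.2, n),
        rng.exponential(4, n),
    ])
    # Synthetic label: 30-day readmission, loosely tied to the features.
    logits = 0.02 * X[:, 0] + 0.6 * X[:, 1] + 0.1 * X[:, 2] - 3.5
    y = rng.random(n) < 1 / (1 + np.exp(-logits))

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # Risk scores for current patients support, not replace, discharge decisions.
    risk = model.predict_proba(X_test)[:, 1]
    print("Held-out AUC:", round(roc_auc_score(y_test, risk), 2))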
 

What would be the best partnership between AI and physicians: clinician augmentation rather than replacement?

 

To put it simply, AI plus humans is better than just AI. Leveraging the full power of AI requires the human element, especially in healthcare. Healthcare is about empathy, and the human feelings physicians experience when they see a patient are hard to put into an algorithm. Touching a patient, feeling their skin, looking into their eyes: the whole anamnesis brings a lot of data that ChatGPT can't access. However, if this data is put into AI, then we can benefit from the predictive analysis capabilities of ChatGPT, but tailored to a specific patient and their own data points. Associating the strong predictive analysis capabilities of AI with the data collected during physician-led clinical anamnesis is the key to bringing out the maximum potential of AI and delivering better healthcare to patients.
 

What can be done by health organisations to ensure proper data governance?

 

Data quality is crucial for AI tools, as the answers provided by AI can only be as good as the knowledge the algorithms have access to. In the example of the New York hospitals, the data is tailored to the use case, collected from the same population the AI will be used for. Quality data collection is the necessary stepping stone towards quality answers from the AI tool.
 

Also, governance is needed to ensure that data quality remains high over time. There is something called data drift: the past or current relevance of an AI tool does not guarantee that its results will stay true in three or six months. Data evolves, and to ensure that AI tools keep delivering their best outputs and remain useful to patients and physicians, data scientists need to keep an eye on the data. Governance procedures must put guardrails and feedback loops in place to continuously check that the AI keeps validating what it should be validating and is not drifting. ChatGPT users and healthcare organisations must continually analyse the data and understand that it changes over time. Special attention should be given to AI tools provided by start-ups, which might not have the necessary resources to keep refining their tool and counteract data drift.
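One way a data science team might operationalise such a feedback loop is to compare the distribution of each model input in a recent window against the training-time baseline. The sketch below applies a two-sample Kolmogorov-Smirnov test from SciPy to a single illustrative feature; the feature, the synthetic data, and the alert threshold are assumptions, and how to respond to a drift alert (retrain, recalibrate, escalate) remains a governance decision.

    # Illustrative drift check: compare a feature's recent distribution to its baseline.
    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(1)

    # Baseline: length-of-stay values seen when the model was trained.
    baseline_los = rng.exponential(4.0, 5000)
    # Recent window: the same feature after the patient population has shifted.
    recent_los = rng.exponential(5.5, 800)

    result = ks_2samp(baseline_los, recent_los)
    print(f"KS statistic={result.statistic:.3f}, p-value={result.pvalue:.2e}")

    # Flag drift when the distributions differ significantly (threshold is illustrative).
    if result.pvalue < 0.01:
        print("Drift detected: schedule model review and possible retraining.")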
 

There is a dilemma to navigate between innovation speed and necessary governance. Moving forward quickly towards new technologies and evolving out of outdated healthcare methods is crucial for improving patient outcomes; however, technological adoption should not go unchecked. Patient safety, data security, and ethical responsibility must be prioritised when adopting new technologies, while also steering away from government intervention that is too heavy-handed and could impede progress.
 

Ethical guardrails are already in place: HIPAA, recommendations, and best practices. Healthcare professionals are already aware of these and have their patients’ interests at heart. Physicians and nurses should be leveraged as patient advocates in all new technology implementation projects, and they need to be involved in decisions at the board level to ensure that new technologies serve patients effectively. Collaboration between healthcare professionals, boards, and vendors is key. Ultimately, this inclusive approach can mitigate the need for additional layers of governance. Healthcare tends to be conservative, favouring gradual change over sudden upheaval. Therefore, healthcare professionals will naturally apply brakes when necessary, taking the time to understand the risk-benefit profile before rushing into implementing change.
 

What are the regional disparities and legal complexities for AI adoption in healthcare?

 

Adopting AI in hospital systems is not uniform and depends on factors like region and demographics. In tech-savvy areas like Silicon Valley, California, hospitals are more receptive to AI tools due to frequent exposure to such innovations. Conversely, rural hospitals may resist change and view AI tools with scepticism. Different medical specialties also have varying attitudes towards AI. For instance, radiology embraces AI for interpreting scans, while other fields may be more cautious.
 

Hospital systems aim for a balance between adopting cutting-edge tools for competitive advantage and ensuring patient welfare and financial viability. However, navigating the legal implications of AI use presents a dilemma. Depending on the circumstances, there's a risk of litigation whether AI is employed or not.
 

For example, a colleague's ER uses AI to detect stroke symptoms from CT scans, alerting doctors in real time. While this aids in prompt intervention, it also raises legal questions. If a hospital lacks such technology and a patient suffers adverse outcomes, legal repercussions may follow, alleging negligence.
 

Moreover, the American Medical Association advocates for healthcare professionals' involvement in AI adoption decisions. Transparency about AI's capabilities, risks, and benefits is crucial for informed decision-making among medical staff.
 

As AI technology advances rapidly, regulatory bodies like the FDA also play a role in overseeing its use in healthcare. The evolving landscape of AI integration into medical practice presents complex ethical, legal, and professional considerations that require careful navigation.
 

The governance of AI in healthcare requires collaboration between healthcare professionals, hospital leadership, and vendors. Education also plays a crucial role in the successful adoption of AI. Hospital executives and healthcare professionals need to be taught what this technology can do, how to use these new tools, and what the associated risks are. For example, cloud data storage can be at increased risk of a breach: understanding this could prompt health organisations to develop their AI solutions on local servers instead. Education and knowledge transfer between stakeholders can help mitigate many risks while implementing AI tools.
 

What do you think the rapid evolution of AI can bring to healthcare in the future?

 

The rapid evolution of technology in healthcare promises ground-breaking advancements that may soon revolutionise patient care. Imagine a future where robots with advanced AI capabilities, powered by technologies such as ChatGPT, can analyse vital health metrics in real time. These robots could seamlessly integrate into medical settings, providing instant feedback to healthcare providers during patient interactions. For example, during a conversation with a patient, a healthcare provider could inquire about their health metrics, and the robot would promptly provide accurate information without the need for additional tests or examinations.
 

Additionally, recent developments from the Ambient.AI platform, which aims to act as a scribe during doctor-patient interactions, offer further insights into the potential of AI in healthcare. Although current technology may still require some refinement, the pace of innovation suggests that we could witness significant improvements in clinical decision support and note-taking tools within the next few years.
 

These advancements have the potential to transform healthcare delivery, allowing healthcare professionals to focus more on patient care and less on administrative tasks. With AI support, healthcare providers could receive valuable insights in real time, enabling them to make more informed decisions and provide better patient outcomes. As technology continues to evolve, the possibilities for improving healthcare are limitless, offering hope for a future where patient care is more efficient, accurate, and personalised.
 

Apple's Vision Pro is 2024's hot news; how will it impact healthcare?

 

Apple's latest virtual reality computing device, the Apple Vision Pro, holds immense potential for transformative impact within the realm of digital health. Apple has consistently disrupted various industries with innovative tech, and the healthcare sector is no different. The Vision Pro merges a high-resolution display with voice, hand, and eye-controlled interfaces, offering to dismantle the barriers between our physical and digital worlds. Embedded within this technical and engineering progress lies an extensive potential for transforming healthcare.
 

The emergence of Vision Pro could herald a new era of telemedicine, making it more interactive and immersive. This tool would enable patients to access their digital doctor's office effortlessly from the comfort of their home or on the go. Inclusivity is also at stake, as the headset could allow better access to medicine for underserved or remote communities. Immersive VR could also tackle the issue of patient education and engagement by elucidating complex medical terminology using interactive 3D models. Aided by the device, patients could explore and learn about medical procedures, health conditions, or treatment plans, sharing a common virtual space with their physician. Vision Pro has the potential to revolutionise how doctors share and discuss health data with patients, using real-time health data visualisation in both virtual and physical consultations. Areas of concern could be highlighted and annotated in real time, and even surgical procedures could be demonstrated by healthcare professionals in virtual reality. The integration of Apple Vision Pro into healthcare could be as impactful as the introduction of electronic health records or the dawn of telemedicine. It paves the way towards an unparalleled level of interaction and comprehension between patients and their health data.
 

Reimagining patient care in this new light, far away from 2D screens and obsolete interfaces, is paving the way for more meaningful, interactive patient-doctor interactions, but the transition presents challenges. As with many emerging technologies, issues of accessibility, cost, and widespread adoption will need to be addressed, and a common, sustainable framework for implementation will have to be co-created among all stakeholders, with medical voices and tech innovators united to improve patients' standard of care and tailor ever more individualised treatment pathways.
 

The implications of VR for doctor-patient interactions are endless, but it is not the only area that could benefit from this promising technology. Medical education, and more generally knowledge transfer, could be radically changed by the arrival of virtual reality. Virtual reality can now serve as an active training tool for aspiring surgeons and a platform for practising operations in a simulated environment. Enterprises such as Osso VR and ImmersiveTouch provide VR solutions for surgeon training and skill refinement, surpassing conventional methods. A pivotal Harvard Business Review study revealed a 230% performance enhancement among VR-trained surgeons compared to their traditionally trained counterparts. These VR-trained surgeons demonstrated superior speed and precision in surgical procedures. VR has the potential to enhance medical education significantly, both in performance and inclusivity: traditionally, only a limited number of students could observe surgeries first-hand, hindering comprehensive learning and widening the differences between local curricula.
 

With VR cameras, surgeons can livestream operations globally, enabling medical students to immerse themselves in the operating room through VR headsets. Case Western University pioneered this approach, utilising devices like the HoloLens to teach human anatomy without cadavers, albeit employing Mixed Reality technology. 2023 saw numerous examples showcasing the integration of VR (and/or AR, XR, etc.) in medical training.
 

Even the traditional world of medical conferences could be shaken up by VR. Dr Brennan Spiegel, a fervent advocate of VR in medicine, delivered an entire MedEd lecture in VR in 2017 and continues to utilise VR in presentations at the annual Virtual Medicine conference. The use case for VR headsets in medical conferences is well established, bringing increased audience engagement and content quality through interactivity, 3D visualisation, and gamification. Yet the adoption of extended reality remains limited, and the headset is still seen as a futuristic gadget.
 

Conclusion

The year 2023 witnessed a profound transformation in healthcare, powered by digitalisation, and the trend will remain strong in 2024, ushering in an even deeper bond between healthcare and digital innovation. Like Apple's Vision Pro, many technological innovations will have long-lasting impacts on how we manage patient care. As healthcare stakeholders continue to embrace these innovations, collaboration, innovation, and a commitment to patient-centricity will be critical for realising the full potential of digitalisation in healthcare and delivering on the promise of enhanced access, efficiency, and quality of care.
 

Conflict of Interest

None.