Healthcare AI therefore faces a clear and important challenge: how can it earn the same level of trust as doctors? "When our experience in healthcare is focused on human care, being treated by machines will seem alien – how AI is implemented and how it interacts with patients is an issue that needs to be addressed," says Marcus Smith, managing director EMEA at Polecat, a leader in reputation intelligence.
Unlike doctors, whose conduct is guided by the Hippocratic Oath, AI cannot pledge to act in our best interests and follow best practice. The technology can, however, be subject to regulation and scrutiny, just as doctors are regulated and held responsible for their actions. This is not an easy issue, Smith points out, because it is not clear-cut whether culpability lies in the application of the technology or in its creation.
Another issue confronting healthcare AI is patients' privacy. Doctors swear to keep patients' medical records private, and they are neither exposed to cyberattacks nor controlled by a company. Even with data use now under greater regulatory scrutiny through GDPR and other data protection laws, Smith says a tension remains in giving machines access to sensitive patient information. This is an issue that AI device manufacturers need to address.
"Constant communication, engagement and transparency around current issues such as culpability and privacy are essential, alongside monitoring for emerging debates as the technology and its applications advance. Only by proving safety and effectiveness in the long-term, will firms encourage acceptance and adoption," Smith adds.
Image credit: Pixabay