Artificial intelligence (AI) is increasingly used in healthcare, from improving the diagnosis of disease to driving innovation in treatment. The role of AI in advancing precision medicine was the focus of a recent conference in Boston organised by Harvard Medical School.
The event featured a panel of experts who tackled the challenges of applying AI in medicine and the many scientific, political, and ethical questions that must be addressed to ensure its safety and effectiveness. Panellist Jonathan Zittrain, a Harvard Law School professor, voiced his apprehension that AI in healthcare could become the next asbestos.
“I think of machine learning kind of as asbestos,” Zittrain said. “It turns out that it’s all over the place, even though at no point did you explicitly install it, and it has possibly some latent bad effects that you might regret later, after it’s already too hard to get it all out.” He noted how easily AI can be duped into reaching false conclusions. To illustrate his point, he showed an image of a cat that a Google algorithm had correctly categorised as a tabby cat. The next slide contained a nearly identical picture of the cat with only a few pixels changed; this time, Google classified the image as guacamole.
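The cat-to-guacamole flip is an instance of an adversarial example: a perturbation, imperceptible to a human, crafted to push an input across a model's decision boundary. The Google model Zittrain showed is not public, so the sketch below uses a stand-in PyTorch classifier to illustrate the standard fast gradient sign method (FGSM); the model, label, and epsilon value are illustrative assumptions, not details from the talk.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in "image classifier" (untrained); a real attack targets a trained model.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # stand-in "cat" photo
label = torch.tensor([0])                             # suppose class 0 = "tabby cat"

# Compute the gradient of the loss with respect to the input pixels, not the weights.
loss = loss_fn(model(image), label)
loss.backward()

# Step every pixel a tiny amount in the direction that most increases the loss.
epsilon = 0.05
adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0)

# Against a trained model, a perturbation this small can change the predicted
# class even though the two images look identical to a human observer.
print("original prediction:  ", model(image).argmax(dim=1).item())
print("perturbed prediction: ", model(adversarial).argmax(dim=1).item())
```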
The discussions highlighted these key challenges related to the use of AI in medicine.
How to Anonymise Data from Wearables
Massive data sets are often needed to train algorithms. But data from wearable devices, for instance, can't easily be anonymised, said Andy Coravos, chief executive of Elektra Labs, a company seeking to identify biomarkers in digital data to improve clinical trials. Could genomic data be de-identified? “Probably not, because your genome is unique to you. It’s the same with most of the biospecimens coming off a lot of wearables and sensors...," Coravos said. This privacy issue can't be overlooked as health tech companies gather ever more data on their customers. Meaningful regulation, she argued, should govern both the collection of these data and the algorithms used to analyse them for healthcare. If algorithms are the new drugs, she said, shouldn’t they be regulated with the same rigour?
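Coravos's point is that physiological signals behave like fingerprints: stripping a name from a trace doesn't anonymise it if the trace itself is unique. The sketch below is a toy illustration with invented users and an invented hourly heart-rate "profile" (none of it from any real product), showing how a supposedly de-identified trace can be matched back to an identified reference database with a simple nearest-neighbour search.

```python
import numpy as np

rng = np.random.default_rng(42)

# Reference database of identified users, each with a characteristic
# daily heart-rate profile (24 hourly means). Names are invented.
users = {name: rng.normal(70, 8, size=24) for name in ["alice", "bob", "carol"]}

# A "de-identified" trace released for research: bob's profile plus sensor noise.
anonymous_trace = users["bob"] + rng.normal(0, 1.5, size=24)

def reidentify(trace, database):
    # Nearest-neighbour match: return the identity whose profile is closest.
    return min(database, key=lambda name: np.linalg.norm(trace - database[name]))

print(reidentify(anonymous_trace, users))  # -> "bob"
```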
Bias Isn’t Just in People, It’s in the Data They Keep
AI is said to be a tool for eliminating bias in healthcare by helping doctors standardise the way they care for patients, with algorithms developed to ensure the provision of the most effective care. But AI is just as likely to perpetuate bias as to eliminate it, said Kadija Ferryman, a fellow at the Data & Society Research Institute in New York. Bias, she noted, is embedded in the data being fed to algorithms, and their outputs can be skewed as a result. She cited a skin cancer detection algorithm that was less effective in people with darker skin. In mental healthcare, data kept in electronic medical records have been shown to be infused with bias against women and people of colour. “Using AI has the potential to advance medical insights through the collection and analysis of large volumes and types of health data,” Ferryman pointed out. “However, we must keep our focus on the potential for these technologies to exacerbate and extend unfair outcomes.”
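One way to see how skewed data skews a model is to train a classifier on a data set where one group is heavily under-represented and compare its accuracy across groups. The sketch below is entirely synthetic; the groups, features, and 95/5 split are assumptions for illustration, not data from the studies Ferryman cited, but it reproduces the pattern she described.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Synthetic patients: the relationship between features and diagnosis
    # differs slightly between groups (e.g. how a lesion presents on darker skin).
    X = rng.normal(0.0, 1.0, size=(n, 5)) + shift
    y = (X[:, 0] + X[:, 1] + rng.normal(0.0, 0.5, n) > 2 * shift).astype(int)
    return X, y

# Training archive skewed 95/5 towards group A, as in many image collections.
X_a, y_a = make_group(950, shift=0.0)
X_b, y_b = make_group(50, shift=1.5)
model = LogisticRegression().fit(np.vstack([X_a, X_b]), np.concatenate([y_a, y_b]))

# Fresh, balanced test samples from each group: the model scores well on the
# group it mostly saw and close to chance on the group it rarely saw.
X_a_test, y_a_test = make_group(1000, shift=0.0)
X_b_test, y_b_test = make_group(1000, shift=1.5)
print("accuracy, group A:", round(model.score(X_a_test, y_a_test), 2))
print("accuracy, group B:", round(model.score(X_b_test, y_b_test), 2))
```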
Confusing Correlation and Causation
AI is useful for finding correlations within data, which can help clinicians and researchers identify the causes of disease and develop more effective treatments. But Zittrain, the Harvard law professor, highlighted the spurious correlations that AI has been known to surface. One such correlation pertains to the number of suicides by hanging or strangulation in North Carolina and the number of lawyers in the state. In another example, the shape of a graph of opium production by year in Afghanistan correlated almost exactly with a silhouette of Mount Everest. The point, Zittrain said, is that a correlation is just a correlation, not a cause, and AI is not good at telling the two apart. Meaningful conclusions, he added, are reached through human logic and collaboration.
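The mechanics of such coincidences are easy to reproduce: any two quantities that merely trend in the same direction over the same period will correlate strongly, causal link or none. The sketch below uses two synthetic, independently generated series; the variable names are invented, not data from Zittrain's examples.

```python
import numpy as np

rng = np.random.default_rng(7)
years = np.arange(2000, 2020)

# Two independently generated series that both happen to rise over the period.
lawyers = 5000 + 120 * (years - 2000) + rng.normal(0, 80, years.size)
device_sales = 30 + 0.9 * (years - 2000) + rng.normal(0, 0.6, years.size)

# Pearson correlation comes out near 1 even though neither series causes the
# other: the shared upward trend alone manufactures the "relationship".
r = np.corrcoef(lawyers, device_sales)[0, 1]
print(f"correlation: {r:.2f}")  # typically around 0.99
```

Detrending either series, or comparing year-on-year differences instead of levels, would collapse the correlation towards zero, which is exactly the kind of sanity check human judgement supplies.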
Source: STAT
Image credit: Pixabay