Artificial intelligence (AI) is widely seen as driving a new phase of automation. Advocates say it is hard to think of any area of our lives that will not be affected by this nascent data-driven technology. AI is already being used in healthcare; for example, Google’s DeepMind has taught machines to read retinal scans with at least as much accuracy as an experienced junior doctor. Meanwhile, a new project funded by the British Heart Foundation aims to develop a machine learning model that predicts a person’s risk of heart attack from their health records.
However, a new report from the UK Academy of Medical Royal Colleges has warned that AI won't solve all the problems facing the healthcare sector. The report, "Artificial Intelligence in Healthcare", commissioned by NHS Digital, looked at the clinical, ethical and practical concerns surrounding AI in the health and social care system in the UK.
The paper is not intended as an exhaustive analysis of AI's potential or of all its implications for clinical care. Moreover, the authors have, of necessity, limited the time horizon to the next few years. The report thus serves as a starting point for clinicians, ethicists, policy makers and politicians, among others, to consider these issues in more depth.
Based on the study, the Academy identified seven key recommendations for politicians, policy makers and service providers. These included the warning that such figures and organisations "should avoid thinking AI is going to solve all the problems the health and care systems across the UK are facing". As the Academy noted, the use of AI in healthcare "has hardly started", despite the claims of some high-profile players.
The report also addressed medical professionals' concerns about losing their jobs to machines or robots. The authors argued that AI is unlikely to replace specialist clinicians, though they noted that future doctors may also require training in data science.
The Academy's other key recommendations included:
- AI must be developed in a regulated way, in collaboration between clinicians and computer scientists, to ensure patient safety
- Clearer guidance around accountability, responsibility and wider legal implications of AI
- Data should be more easily available across private and public sectors for those who meet governance standards
- Transparency of tech companies in order for clinicians to be confident in the tools they are using
- AI should be used to reduce, not increase, health inequality – geographically, economically and socially.
Source: Academy of Medical Royal Colleges