System C Healthcare,
Email: [email protected]
‘How many doctors,’ goes the old joke, ‘does it take to change a light-bulb?’ There are plenty of possible answers to this riddle. ‘It depends if the bulb was insured,’ is an American response. ‘One to change the bulb, another to give a second opinion, and a third to draw up the bill,’ goes an alternative. Here in England we have a one-word answer: ‘Nurse!’
How many doctors, then, does it take to design the rules and processes for a clinical computer system? Ask this question in England, where the NHS is trying to deploy a national network of new systems, and you might find that the answer is somewhere around 109,000. Every doctor, it sometimes seems, expects to be consulted. Lack of clinical engagement is the most common criticism aimed at the English National Programme, from doctors who feel disenfranchised by the whole exercise.
Of course software suppliers cannot practically consult their entire potential user base, so where do they begin? Imagine a single deployment of, for example, an order communications system into one average-sized hospital. How many clinicians will really be consulted for this deployment, to the extent that they will be asked to make real decisions on the way that the software will operate? Surprisingly, it seems that in most deployments very few clinicians roll up their sleeves and get involved. They simply can’t spare the time. There may be one or two from each laboratory specialty who will define the tests in the orderables catalogue, and will help design the order forms that appear for each test and the clinical questions that appear on each form. One or two people from Radiology will do a similar job. Then a project doctor and a project nurse might review the way that the results will be displayed; they might comment on the format of labels, and they might have some input, along with the pathologists, into the order forms. And that’s about it. Clinical engagement will have included a dozen people, at the most, out of a clinical workforce of over a thousand. Yet it seems that in nearly every case like this, users are happy to trust their colleagues, and cries of ‘no one consulted me’ are rarely heard.
But for a big programme like the one in England, that just won’t do. Consultation has to be wide. Software has to be built to reflect best practice, and that means that national bodies and associations (like the Royal Colleges) have to become involved. It also has to accommodate ways of working that may differ between institutions. A system designed to meet the practice of a big teaching hospital might prove unworkable in a small, general hospital.
And it has to flex to allow different clinical preferences to emerge. Imagine asking two diabetologists to design an on-screen form to collect clinical information appropriate to a diabetes admission. ‘It is like herding cats,’ a software designer once told me. ‘All you get when you ask the opinion of two doctors is three opinions.’
Is that impression true? There are layers of problems here. If software is to meet the needs of doctors, then doctors need to have some serious input into the way it is built or configured. But if the software providers are to consult effectively then the clinicians need to understand the system well enough to understand the impact of their decisions, and that takes time.
Doctors are not known for their liberal possession of spare time. Those who volunteer to participate may need to be backfilled to cover the clinical duties they will have to miss, and this is often something that hospitals have not budgeted for. Suppliers worry that those clinicians who do find time to engage will turn out to be the awkward ones, unrepresentative of the wider community, pursuing a narrow agenda of their own. And the only way to protect against that is to find more clinicians. But other clinicians don’t have the time.
One big service provider in England, faced with the challenge of deploying a clinical system across more than one hundred institutions, tried to address the issue head on. Anxious to consult with clinicians across twelve key specialties, they issued invitations to hospital clinicians and threw a lavish two-day launch event at a luxury conference centre. They were rewarded with 70 attendees, including 40 doctors. It was a big effort to net a very modest percentage of active doctors, and the company was eager to make the best use of the people they had found. A dozen expert reference groups were established, facilitators from the company were nominated, chairpersons were elected, meetings were scheduled, and the process of clinical engagement had begun. It proved difficult to sustain the momentum.
Diary dates were fiendishly hard to agree; many clinicians lost interest once it became clear that they were signing up for a long-drawn-out process. But, despite some rocky moments, the programme worked well enough to generate a raft of clinical decisions. An additional generic group was drafted to address decisions that crossed specialty boundaries. And to the surprise of most of the people involved - clinicians and software builders alike - consensus was never difficult to reach. While it might be going too far to describe the process as herding sheep, the image of herding cats turned out to be false.
When you consult doctors (and nurses and allied health professionals too) in the context of a properly managed forum, with sufficient background to enable the people involved to understand the implications of their decisions, then good decisions do get made, and they get made quite quickly.
Of course, not every software issue lends itself to consensus. Occasionally a particular point of contention defied agreement. One such sticking point was this: should an order form for a laboratory test allow the doctor to alert the laboratory that this specimen might be a biohazard?
The software supplier was clear enough – the computer system could include such a message if the users wished. But the users were divided. Some felt that it was their professional duty to colleagues in the laboratory to caution them if there was any suspicion that a specimen might represent a particular risk. HIV was probably uppermost in their minds. They argued that this reflected the un-computerised process in many hospitals, where a specimen label might carry an ‘alert’ message and the phlebotomist would ‘double-bag’ that specimen as an extra precaution. Others took a contrary view. They argued that every specimen should be treated with maximum precaution; labelling some specimens would encourage lazy practice. A third group were worried about the breach of confidentiality that a ‘biohazard’ label would represent. This was a profound difference of opinion, not based upon awkward local practice, but on soundly held principles. It was not an issue that could be resolved by democracy. The Royal College declined to support one view or the other, and so the software company was instructed to make the software configurable, allowing either approach to prevail.
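A configurable outcome like this can be sketched as a simple per-deployment setting that decides whether the biohazard question appears on the order form at all. The sketch below is illustrative only – all the names in it are hypothetical, not the actual product’s design:

```python
# A minimal sketch, assuming a hypothetical order-form builder, of how a
# supplier might make the biohazard prompt a per-deployment choice rather
# than a hard-coded feature.

from dataclasses import dataclass, field


@dataclass
class DeploymentConfig:
    """Settings each institution chooses at deployment time."""
    # Off by default: the 'treat every specimen as high-risk' policy.
    show_biohazard_alert: bool = False


@dataclass
class OrderForm:
    test_name: str
    questions: list = field(default_factory=list)


def build_lab_order_form(test_name: str, config: DeploymentConfig) -> OrderForm:
    """Assemble a laboratory order form, adding the biohazard question
    only where the deploying institution has opted in."""
    form = OrderForm(test_name=test_name)
    form.questions.append("Clinical details")
    if config.show_biohazard_alert:
        form.questions.append("Possible biohazard specimen?")
    return form


# Two hospitals run the same software but choose different policies.
opted_in = build_lab_order_form("Full blood count",
                                DeploymentConfig(show_biohazard_alert=True))
opted_out = build_lab_order_form("Full blood count", DeploymentConfig())
```

The point of the design is that neither clinical camp is overruled in the code itself: the profound disagreement is pushed out to a local, explicit deployment decision.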
The lesson for software suppliers is clear. Do not be afraid of engaging with your clinical users, but make sure that you manage the process well. Ensure that the clinical users have enough time to complete the task, even if that means arranging for locum support. Make sure that the clinicians understand your system thoroughly; users’ fear of an unknown system is often at the heart of their resistance. Keep engagement sessions short (2–3 hours) but make them frequent.
Document decisions exhaustively. Finally, be sure to communicate effectively to all those clinical users who were not part of the process. As often as not, their demands to be consulted are little more than pleas to be kept informed.
It all reminds me of one more light-bulb joke. How many psychiatrists does it take to change a light-bulb? Only one, but the light-bulb has really got to want to change.