Andrew Smith, FRCA, MRCP
Royal Lancaster Infirmary and Institute for Health Research,
Lancaster University, UK
The ‘patient safety’ movement has come about through an appreciation that the techniques and approaches that have been used to improve safety in other industries can be applied to healthcare (van der Schaaf, 2002). The underlying assumption in this approach, as in ‘safety science’ in general, is that systematic scrutiny and analysis of problems using rigorous methods can bring benefits.
A Framework for Studying Safety
To analyse accidents and incidents, a structured way of thinking about them is needed. Reason’s model of accident causation is well known in the field. It suggests that there are many potential accidents and many potential contributing factors, but that most potential accidents are prevented from becoming actual accidents by a series of controls or barriers. When the controls fail, the accident that has been ‘waiting to happen’ can occur. Many factors can contribute to the genesis or development of an accident:
• a patient,
• an individual staff member,
• some aspect of the team, including communication amongst its members,
• education and training,
• equipment and resources,
• working conditions and
• organisational and strategic issues.
These factors may act as influencing factors or as causal factors. Removing an influencing factor might not have prevented the accident, but it should improve the safety of care in general. Clearly, if a factor actually caused the accident, removing it should greatly reduce the risk of repeating the accident.
Barriers may be physical (e.g. keypad-controlled doors), natural (e.g. allowing time to pass before moving to the next stage of a process), human (e.g. checking the temperature of a bath before immersing an elderly patient) or administrative (e.g. protocols and procedures). Physical barriers are the most reliable in terms of providing failsafe solutions to safety problems. Natural barriers, whilst less effective, generally provide a more robust solution than human action and administrative barriers. However, in healthcare, there is a predisposition to rely on human action and administrative barriers as solutions to problems.
Many intensivists are familiar with critical incident studies from anaesthesia, a neighbouring speciality which pioneered the use of critical incident reporting in healthcare. Creating a culture where incidents are readily reported may not be straightforward: when something has gone wrong, staff often presume that a mistake must have been made and that they will be blamed or disciplined. However, reporting incidents allows their analysis and the possible identification and elimination of contributing factors. This technique is called root cause analysis (RCA) and, over the years, it has led to significant gains in safety in many fields of human activity.
Prospective Techniques for Identifying Risks
However, although RCA is an effective technique and can prevent repetition of a given event, it is obviously a reactive process, taking place after harm has been done. New working practices, new equipment, collective forgetting and the pure capriciousness of chance all mean that the potential for new problems is always present. Prospective methods of risk identification complement the retrospective approach by attempting to tackle unforeseen hazards. An overall risk assessment strategy asks four questions:
1) ‘What can go wrong?’
2) ‘How, and how often?’
3) ‘How bad?’
4) ‘Is there any need for action?’
This last question is important. Often, the risk can be reduced and some action must be taken. Sometimes, however, successful risk management depends on learning to live with risk (Institute of Risk Management, 2003). This can be a difficult decision, but if a systematic appraisal has been conducted it can be defended as evidence-based.
A commonly used framework for analysis is to assess the effect (or severity) of each risk, its likelihood, and what controls or barriers exist to reduce it. The choice of method depends on a number of factors, including the level of perceived risk and the possibility of its mitigation, the capabilities of staff, the availability of data and the type of system. Further, different techniques may be applicable at different stages of the same project, with more structured tools becoming necessary as a project progresses. Prospective techniques can be classified into those with a ‘top-down’ approach, which start with potentially hazardous outcomes and work backwards to analyse possible contributory factors, and those with a ‘bottom-up’ approach, which start with processes or potential causes and try to predict the hazards that could arise from them. These techniques vary considerably in complexity, need for training in their use, degree of structure and quantification, and so on.
An example of the use of these principles, relevant to intensive care, was published in 2004. Apkon and colleagues from a paediatric ICU in the United States used the technique of failure mode effects analysis (FMEA) to design safer processes for intravenous drug infusions (Apkon et al. 2004). FMEA is a tool originally developed by reliability engineers for the systematic evaluation of a complex process, the identification of elements that risk causing harm and the prioritisation of remedial measures. It estimates failure rates from various sources, including published literature, direct measurement and perceptions based on experience. The information is then used to predict the behaviour of a system. Apkon and colleagues put together a multidisciplinary team of staff including a pharmacist, intensivists, nurses and an epidemiologist. The team then identified the ways in which each element of the drug delivery process might fail. They characterised the elements or steps in the process and, for each element, scored (1) the severity of failure should it go undetected, (2) the likelihood of occurrence and (3) the likelihood that the failure would escape detection before causing harm. Multiplying the three scores together yielded a risk priority number, which allowed actions to reduce risks to be prioritised. Having identified that the biggest risks lay in the calculation of the correct infusion rate, in the bedside preparation of the infusion and in the programming of the infusion pumps, the team made a number of changes to improve the process. The hospital’s existing computerised drug order system was modified to create a database of standard solutions. Calculations and formulation were also transferred to the computerised system. Pre-manufactured
solutions were used wherever possible or infusions were prepared to order in the pharmacy. This allowed longer ‘hang times’ for infusions and, as the risk of an adverse event is related to the frequency of opportunity, thus reduced the overall risks still further.
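The scoring arithmetic at the heart of FMEA can be sketched in a few lines. The failure modes, scales and scores below are invented for illustration and are not Apkon and colleagues’ actual data; only the calculation itself, severity × occurrence × detectability yielding a risk priority number used to rank remedial actions, follows the method described above.

```python
# Illustrative FMEA scoring sketch. Failure modes and scores are
# assumptions for demonstration; a real exercise would elicit them
# from a multidisciplinary team.
from dataclasses import dataclass

@dataclass
class FailureMode:
    step: str        # step in the drug-infusion process
    severity: int    # 1 (negligible) .. 10 (catastrophic) if undetected
    occurrence: int  # 1 (rare) .. 10 (frequent)
    detection: int   # 1 (almost always caught) .. 10 (escapes detection)

    @property
    def rpn(self) -> int:
        # Risk priority number: product of the three scores.
        return self.severity * self.occurrence * self.detection

modes = [
    FailureMode("calculate infusion rate", 9, 6, 7),
    FailureMode("prepare infusion at bedside", 8, 5, 6),
    FailureMode("programme infusion pump", 9, 4, 5),
    FailureMode("label the syringe", 4, 3, 2),
]

# Highest RPN first: these steps receive remedial attention first.
for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"{m.step:30s} RPN = {m.rpn}")
```

With these example scores, the rate calculation step tops the ranking (RPN = 9 × 6 × 7 = 378), mirroring the way the real exercise singled out rate calculation, bedside preparation and pump programming.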
My second example is from the wider field of perioperative care. A structured ‘what if’ technique was used to identify the non-operative risks associated with elective surgery under general anaesthesia in adults. The exercise was co-ordinated by the UK National Patient Safety Agency in 2004 (Adedokun et al. 2006). Again, a group of participants from different backgrounds in healthcare was brought together to think systematically about potential hazards in current processes. To guide the process, ‘what if’ questions were raised (e.g. ‘What if the wrong premedication were given?’, ‘How could process X go wrong?’, ‘Is it possible to…?’, ‘Has anyone ever been able to…?’). Members of the group were then asked to grade each risk they had identified according to its likelihood and severity, and these grades were incorporated into a risk matrix, used to rank the risks in order of importance. The study yielded a number of areas for attention, including perioperative hypothermia, neuromuscular blockade, training in airway maintenance devices and removing distractions for the anaesthesiologist. The main drawback was the time required. In many safety-critical industries, staff members are required to take part in such exercises and time is made available.
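The likelihood-and-severity matrix used in such an exercise can be sketched as follows. The band thresholds, the example risks and their grades are all my own assumptions chosen only to illustrate the ranking mechanism, not the Agency’s actual matrix or findings.

```python
# Illustrative likelihood x severity risk matrix.
# Example risks, grades and band thresholds are assumptions for
# demonstration; a real exercise would use the group's own grades.

# Score bands, highest threshold first (thresholds are assumptions).
BANDS = [(20, "extreme"), (12, "high"), (6, "moderate"), (0, "low")]

def band(score: int) -> str:
    """Map a likelihood x severity product (1-25) to a risk band."""
    for threshold, name in BANDS:
        if score >= threshold:
            return name
    return "low"  # unreachable: the final threshold is 0

# (likelihood 1-5, severity 1-5) -- invented example grades
risks = {
    "perioperative hypothermia": (4, 3),
    "residual neuromuscular blockade": (3, 4),
    "distraction of the anaesthesiologist": (4, 2),
}

# Rank risks by their matrix score, highest first.
ranked = sorted(risks.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True)
for name, (likelihood, severity) in ranked:
    score = likelihood * severity
    print(f"{name}: score {score} ({band(score)})")
```

The product score is the simplest possible aggregation; many organisations instead look the pair up in a printed grid, which allows asymmetric weighting of severe-but-rare risks.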
This article has outlined the variety of prospective risk analysis techniques available. Taking part in the process of analysis not only helps tackle the specific risks identified but also benefits clinical and support staff, whose views and perceptions of safety issues are often changed for the better. The techniques are simple in principle and I encourage readers to try them for themselves. I would be happy to be contacted for further help and advice.