A UK perspective.
Competence in radiology is established by the end of training by a variety of summative and formative assessments, collated within an e-portfolio alongside annual educational supervisors’ reports and review of competence progression.
In the United Kingdom, training in clinical radiology is undertaken as a five-year programme (six years for interventional radiology), with the aim of producing competent practitioners at the end of this period. UK training standards are defined by The Royal College of Radiologists (RCR), and this article describes what must be learned during training, the methods used to assess knowledge and practical skills, and how these systems are implemented. Challenges to successful implementation include those related to the assessments themselves, as well as the need to engage both the teaching faculty and trainees. These challenges, and solutions to them, are discussed below, as is the additional issue of maintaining the competence of trained radiologists.
The Royal College of Radiologists produces training curricula for both clinical and interventional radiology, with the first three years of core training being undertaken by all trainees. The curricula outline the generic capabilities and clinical attributes required of trainees as they progress through core and specialist training, culminating in the award of a certificate of completion of training (CCT). In addition to the specialist skills trainees have learnt during training, the curricula mandate that they must retain their competencies in general acute radiology, equivalent to physicians being able to manage the ‘unselected take’.
The current college curricula (Royal College of Radiologists 2016a; 2016b) are competency-based, with lists of knowledge, skills and behaviours, which must be achieved and signed off, as well as familiarity with specific disease entities and diagnoses listed under different clinical subjects. This approach, although thorough, has led to trainees having to achieve sign-off in long lists of competencies, which can engender an atomised feel to learning individual items, rather than addressing overall performance of wider practice. This has been recognised by the General Medical Council (2017) and they have mandated that royal colleges rewrite their curricula as outcomes-based documents. The next iterations of the RCR’s curricula will contain high-level outcomes which must be achieved, rather than numerous individual competencies. Examples of outcomes include appropriately selecting and tailoring imaging according to patient need and managing a multidisciplinary team meeting.
Assessments during radiology training comprise both formative and summative evaluations of progress and are organised both locally and nationally. Locally organised assessments tend to comprise pre-on-call evaluations of competence in reporting plain radiographs, undertaking ultrasound and interpreting CT. Such assessments are usually developed and delivered by individual training schemes (or schools of radiology) and are used to ensure trainees are judged competent to commence on call.
Standardised national assessments mandated by the RCR comprise formative workplace-based assessments (WPBA) undertaken in individual departments and summative examinations (Fellowship of The Royal College of Radiologists – FRCR) delivered by the college. All subjects in the curricula are mapped to a form of assessment, ensuring the most appropriate means of evaluating trainees’ progress are used. Both formative and summative assessments have issues related to their enactment and the interpretation of their results, and these are discussed in the following paragraphs.
WPBA were introduced into radiology training in 2010 and comprise formative evaluations of interpretive and procedural work, teaching, quality improvement/audit and managing multidisciplinary team meetings. Trainees also undertake an annual multisource feedback exercise. These assessments are administered locally according to national guidelines, and the outcomes recorded in a standardised fashion.
Despite this, there are issues with WPBA being interpreted and enacted in varying ways by trainers and trainees. They are initiated by trainees, and can be manipulated to obtain ‘good’ scores by selecting easy cases, requesting assessments in retrospect and approaching assessors perceived as generous. Assessors may collude in such behaviour and will sometimes use the assessments to make a summative judgement, despite their intended formative nature (Ramsden and Roberts 2015). Drivers of such behaviours include trainees wanting to ‘pass’ assessments, despite their formative ethos, and the need to complete target numbers of WPBA prior to their annual review.
To restore the formative nature of WPBA, the RCR is removing many of the opportunities for trainees’ performance to be scored on the assessment forms, replacing them with free-text boxes. This encourages the provision of useful feedback, rather than scores which do not assist trainees’ development.
The FRCR examination syllabus comprises all subjects included in core training, and passing it is a prerequisite to future completion of training. This is the RCR’s formal summative assessment and the means by which it admits trainees to the fellowship and allows use of the post-nominal letters, FRCR. During their first year of training, trainees sit a first examination in anatomy and the scientific basis of imaging, and passing this allows them to sit the second part during or after their third year of training. This examination is subdivided into two parts, an initial test of knowledge (2a), which, if passed, is followed by an assessment of clinical performance (2b). The latter comprises three components: short and long cases, followed by an oral examination.
Administering the FRCR examination raises issues of reliability and validity, both the subject of rigorous quality control by the college’s exam boards. Validity of all parts of the examination is ensured by careful blueprinting to the core curriculum, and all of the examinations (and the questions within them) are carefully scrutinised for consistency between sittings. Inconsistency may be reduced by removing poorly performing questions and (if necessary) adjusting pass marks to take account of more (or less) difficult examinations.
The FRCR 2b is an assessment of trainees’ competence in clinical work under standardised ‘test’ conditions, the latter being necessary to ensure that the examination is reliable, rigorous, and delivered fairly to all candidates, whatever their training background.
Assessment of trainees’ actual performance in the workplace is of particular interest, as opposed to evaluation of competence under test conditions, as it is the former which most closely reflects their ability to undertake day-to-day work. Although WPBA is undertaken in the workplace, its variable delivery and the vast number of assessors of varying experience (almost all of whom will be known to the trainee) mean that it cannot be reliably used as a summative assessment.
From both the WPBA and FRCR perspectives, it is hoped that a judgement of trainees’ likely performance may be derived from these existing means of assessment, ensuring that trainees progress appropriately and patient safety is maintained.
One of the most important factors in successfully implementing an assessment system is ensuring that both trainers and trainees are fully engaged, so that they participate in initiatives such as WPBA. Good communication keeps both assessors and assessees aware of any changes to assessments, and also optimises faculty development. Faculty development is essential to delivering examinations and WPBAs, and is particularly important for the latter, as training assessors is a key means of ‘standardising’ assessments enacted by numerous trainers. It also represents another means of maintaining the formative ethos of WPBA and encouraging the provision of high-quality feedback.
Another initiative used to enhance the provision and recording of assessments is the RCR’s ePortfolio, used by all trainees to record their progress through training. The ePortfolio is used to record the results of all assessments, including supervisors’ reports and the annual review of competence progression (ARCP), both critical informants of trainee progression. The ePortfolio also allows the trainee to record successful achievement of individual competencies, other accomplishments, incidents and their reflections upon them, comprising a complete record of training.
Aside from the development of outcomes-based curricula and the removal of the majority of WPBA scoring, the RCR is engaged in a major review of its assessment of clinical performance (FRCR 2b). The purposes of the examination have been articulated as the assessment of radiology knowledge and its application, observational and analytical skills, and communication. The review seeks to optimise the examination in order to achieve these objectives, recommending enhanced blueprinting to ensure full curricular coverage and the use of some anchor cases, seen by all candidates, to enable greater standardisation and allow comparison between candidates.
A further development planned for the examination is the introduction of domain-based scoring, mapped to the purposes outlined above. The purposes are broken down into generic domains, with clearly defined positive descriptors allowing examiners to divide candidates into excellent, clearly passing, borderline and failing categories based upon agreed criteria used during the discussion of a series of imaging cases.
Outside of the examination, the RCR is developing procedure-based assessments (PBAs), predominantly for the assessment of practical work in interventional radiology. These are detailed ‘stepwise’ assessments of trainees’ performance of specific practical procedures, and unlike other evaluations performed in the workplace, may be used both formatively and summatively.
Maintenance of competence
Following certification of the completion of training, radiologists are expected to maintain their competencies, and separate systems are in place to facilitate this. The RCR administers a system of recording credits awarded to radiologists undertaking continuing professional development (CPD) by varying methods, including attending courses and meetings, authoring or presenting research and experiential learning. Radiologists are expected to achieve annual or five-year targets of points awarded for various activities to show their engagement with CPD, although demonstrating a direct relationship between such engagement and maintenance of competence is difficult.
More direct methods of maintaining clinical competence include participation in departmental learning and discrepancy meetings and other forms of peer development organised between colleagues. The RCR encourages the practice of peer review of a percentage of each radiologist’s cases, and although this has been successfully introduced in some areas (eg breast), it has not been introduced throughout radiology, largely due to workforce pressures. In addition, the RCR publishes a regular Radiology Errors and Discrepancies (READ) newsletter (Royal College of Radiologists 2012-) based upon cases submitted by individual radiologists, which demonstrate particularly useful or critical learning points. Plans are in place to change the newsletter title to Radiology Events and Learning (REAL) in order to highlight areas of good practice. By these means of sharing both discrepancies and areas of excellence, radiologists seek to maintain and improve their practice by various means of peer learning.
All of these activities (CPD and peer learning) feed into radiologists’ annual appraisal and five-yearly revalidation, and demonstrating engagement with them helps practitioners show evidence of maintaining their competencies. Although these processes have a wider scope than assessment of clinical competence, the latter forms part of the process, and radiologists can be assisted by discussing their learning needs and including them in their personal development plan (PDP) if necessary. By these means proficiency may be maintained, and radiologists may aspire to excellence by developing their practice in specific areas.
The RCR curricula have multiple forms of assessment mapped to them, including formative WPBAs and summative examinations. Although the latter represent the high-stakes assessments undertaken during training, using the results of all assessments undertaken by a trainee has the potential to give a broad-based (and hopefully more authentic) evaluation of their performance in the workplace. The challenges of administering such an assessment system have been outlined in the key points and text, with differing issues affecting WPBAs and examinations.
Systems to enable the maintenance and assessment of competence after the completion of training are less prescriptive, although radiologists are expected to achieve CPD targets and to engage in peer learning, both of which feed into appraisal and revalidation.
- Outcomes-based curricula designed to ‘produce’ competent diagnostic or interventional radiologists with retention of acute skills
- Assessments comprise locally developed evaluations and nationally mandated formative workplace-based assessments (WPBAs) and summative examinations
- Challenges of performance assessment include standardisation, manipulation and purpose
- Challenges of summative assessment include validity and reliability, blueprinting to the curriculum, fairness, quality assurance and assessment of performance
- Successful implementation requires engagement of trainers and trainees, communication of changes, educator development, facilitation of delivery and encouragement of participation
- Maintenance of competence is through continuing professional development, aspiring to proficiency and excellence.