A recently published study describes the results of a pilot peer review programme for radiologists, integrated into the workstation of an outpatient practice.

The software used was Intelerad Peer Review, integrated into the Radiology Information System (RIS). Cases were selected at random: if an appropriate prior study was available, and if neither the reviewing nor the original interpreting radiologist had exceeded review targets, the case was scored using the modified RADPEER system.

2,241 cases were randomly assigned for peer review. Of these, 1,705 (76%) were reviewed. Reviewing radiologists agreed with the prior reports in 99.1% of assessments. Positive feedback (score 0) was given in three cases (0.2%), and concordance (scores of 0 to 2) was assigned in 99.4%, similar to previously reported rates of 97.0% to 99.8%. Clinically significant discrepancies (scores of 3 or 4) were identified in only 10 cases (0.6%). Eighty-eight percent of reviewed radiologists found the reviews worthwhile, 79% found the scores appropriate, and 65% felt the feedback was appropriate. Two-thirds of radiologists found case rounds discussing significant discrepancies valuable.
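The score buckets and rates above can be sketched in a few lines; the boundaries follow the modified RADPEER categories as reported (0 to 2 concordant, 3 or 4 clinically significant discrepancy), while the function name and the tally code are purely illustrative, not part of the study's software.

```python
def classify(score: int) -> str:
    """Map a modified RADPEER score to its reporting category
    (0-2 = concordant, 3-4 = clinically significant discrepancy)."""
    if 0 <= score <= 2:
        return "concordant"
    if score in (3, 4):
        return "significant discrepancy"
    raise ValueError(f"invalid RADPEER score: {score}")

# Reported totals from the pilot: 1,705 reviewed cases,
# of which 10 scored 3 or 4 (clinically significant discrepancy).
total_reviewed = 1705
significant = 10
concordant = total_reviewed - significant

print(f"concordance: {concordant / total_reviewed:.1%}")   # 99.4%
print(f"discrepancy: {significant / total_reviewed:.1%}")  # 0.6%
```

Recomputing the rates from the raw counts reproduces the 99.4% concordance and 0.6% discrepancy figures quoted in the study.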

The authors conclude that radiologists should consider peer review as a way to reduce errors and improve consistency. While its cost-effectiveness needs further study, peer review remains for now the primary quality assessment tool in radiology.


References:

O'Keeffe MM, Davis TM, Siminoski K. A workstation-integrated peer review quality assurance program: pilot study. BMC Medical Imaging 2013;13:19. doi:10.1186/1471-2342-13-19.


