Picture archiving and communication systems (PACS) anchor radiology workflow and operational efficiency, so replacement decisions affect productivity, user experience and clinical coordination. A large free-standing children’s hospital undertook a year-long market evaluation to determine whether a new PACS would improve efficiency. The department performs about 280,000 studies annually, holds more than 5 million archived studies dating to 2000 and operates within an enterprise imaging environment that includes a vendor-neutral archive and an enterprise viewer. The project focused strictly on the clinical Radiology PACS, excluding the enterprise viewer, vendor-neutral archive, research or specialty PACS and reporting software. A six-stage, data-driven approach covered team formation, expectation setting, background assessment, initial vendor assessment, scripted demonstrations and final evaluation.
Defining Scope, Governance and Evaluation Pillars
Governance began with appointment of a committee chair by the Radiologist-in-Chief, who retained final decision authority while the committee issued a recommendation. A 10-member group represented clinical sections, technologist leadership, informatics and a fellow, supported by procurement. A leadership subgroup met weekly to standardise demonstrations, design surveys, analyse results and draft the request for proposal (RFP). Department-wide communications via minutes, emails and open meetings ensured transparency.
Expectations were framed around three outcomes: remain with the incumbent, select a new vendor or retain the incumbent with planned re-evaluation. Scope remained the clinical PACS; solutions without integrated worklists were considered only if an external worklist could be added later. Rather than seek in-year budget approval, the team added a replacement line to the five-year plan and collected indicative quotes.
A background assessment identified operational priorities using 11 weighted pillars and 236 concepts. Concepts such as hanging protocols and first-image load time were rated on a 1–5 scale, combined with pillar weights and shared with faculty. This yielded a defensible scoring model for consistent use in subsequent stages.
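To illustrate the kind of weighted pillar-and-concept model described, the sketch below (in Python) combines 1–5 concept ratings with pillar weights into a single vendor score. The pillar names, weights and ratings shown are hypothetical placeholders; the article does not publish the actual 11 pillars or 236 concepts.

```python
# Minimal sketch of a weighted pillar/concept scoring model.
# All pillar names, weights and ratings below are hypothetical.
pillars = {
    "radiologist_workflow": {"weight": 0.15,
                             "ratings": {"hanging_protocols": 5,
                                         "first_image_load_time": 4}},
    "administration":       {"weight": 0.10,
                             "ratings": {"user_management": 3,
                                         "audit_logging": 4}},
}

def vendor_score(pillars: dict) -> float:
    """Average the 1-5 concept ratings within each pillar,
    then combine pillar averages using the pillar weights."""
    total, weight_sum = 0.0, 0.0
    for pillar in pillars.values():
        ratings = list(pillar["ratings"].values())
        pillar_avg = sum(ratings) / len(ratings)
        total += pillar["weight"] * pillar_avg
        weight_sum += pillar["weight"]
    return total / weight_sum  # weighted mean on the original 1-5 scale

print(round(vendor_score(pillars), 2))
```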
Shortlisting Through Standardised Demonstrations
The team surveyed the market, consulted peer institutions and reviewed independent evaluations. Ten vendors were targeted for demonstration meetings; nine scheduled 1.5–2-hour sessions were completed and one vendor was added ad hoc. Questioning followed the predefined pillars across radiologist and technologist workflows, quality control, toolsets, multidisciplinary presentation tools, interruption handling, hanging protocols, communication, administration, infrastructure, cloud readiness and artificial intelligence integration.
Post-meeting debriefs produced pros and cons and a consensus ranking, leading to a shortlist of five vendors including the incumbent. Subsequent virtual demonstrations used a 23-topic script to ensure like-for-like coverage of usability, workflow and administration. Non-incumbents completed 90-minute recorded sessions, while the incumbent focused on recent updates and identified gaps. Surveys captured element ratings on a modified 1–5 scale and overall 0–100 scores, with later rank-ordering from first to fifth. Analysis considered mean overall score, summed element ratings, weighted results and mean rank. Three vendors advanced to final evaluation after a vote resolved a tie.
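As a rough illustration of how such survey data could be consolidated, the sketch below computes the mean overall score, summed element ratings and mean rank per vendor. The responses and field names are hypothetical; the actual survey instrument and results are not reproduced in the article.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical survey responses: element ratings on a modified 1-5 scale,
# an overall 0-100 score and a rank from 1 (best) to 5.
responses = [
    {"vendor": "A", "elements": [4, 5, 3], "overall": 82, "rank": 1},
    {"vendor": "A", "elements": [3, 4, 4], "overall": 75, "rank": 2},
    {"vendor": "B", "elements": [5, 4, 4], "overall": 88, "rank": 1},
]

by_vendor = defaultdict(list)
for r in responses:
    by_vendor[r["vendor"]].append(r)

for vendor, rows in by_vendor.items():
    summary = {
        "mean_overall": mean(r["overall"] for r in rows),
        "summed_elements": sum(sum(r["elements"]) for r in rows),
        "mean_rank": mean(r["rank"] for r in rows),
    }
    print(vendor, summary)
```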
Onsite Testing, Rigorous RFPs and Final Decision
Final evaluation comprised onsite demonstrations with departmental data, an extensive RFP and cost analysis, plus reference calls and follow-ups at the Society for Imaging Informatics in Medicine meeting. Anonymised stress-test datasets included patients with more than 1000 prior studies, cases exceeding 20,000 images, studies with more than 1000 series, mixed modalities and atypical data types. Vendors executed three-day reading room sessions under business associate agreements.
Role-specific instruments gathered 1–5 element ratings and overall 1–100 scores, with qualitative feedback on most liked and disliked features. The RFP contained about 920 targeted questions spanning pricing, migration, architecture, support, implementation, training, upgrades, administration and role-specific workflows. Responses were scored 0, 1 or 2, averaged and scaled to derive an RFP response score, alongside a comparison score from feature rankings. Cost analysis covered five-year total ownership and, where available, cost per study.
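A minimal sketch of how the 0/1/2 response scoring and a cost-per-study figure could be derived is shown below. The roughly 920 real questions, any weighting and the actual prices are not published, so every value here is an illustrative placeholder; only the annual study volume comes from the article.

```python
# Each RFP answer is scored 0 (does not meet), 1 (partially meets) or 2 (meets).
# Placeholder answers; the real RFP contained about 920 questions.
answer_scores = [2, 1, 2, 0, 2, 1, 2, 2]

# Average the 0-2 scores and scale to a 0-100 RFP response score.
rfp_response_score = 100 * (sum(answer_scores) / len(answer_scores)) / 2

# Five-year total cost of ownership divided by projected study volume
# gives an approximate cost per study (hypothetical cost figure).
five_year_tco = 4_200_000          # placeholder five-year total
studies_per_year = 280_000         # annual volume cited in the article
cost_per_study = five_year_tco / (5 * studies_per_year)

print(f"RFP response score: {rfp_response_score:.1f}")
print(f"Cost per study: ${cost_per_study:.2f}")
```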
Consolidated findings highlighted differences in usability, toolsets, workflow and integration, with strengths including 3D functionality, conferencing support and integration, and weaknesses including interruption workflow, user interface, worklists and performance. After two synthesis meetings, the committee recommended continuing with the incumbent while documenting deficiencies and scheduling formal re-evaluation in three years. The Radiologist-in-Chief accepted the recommendation.
A structured, transparent and data-driven process enabled systematic comparison of PACS options and justified the outcome. Weighted pillars and concepts, scripted demonstrations, realistic onsite testing with challenging data, comprehensive RFP scoring and disciplined cost review created a consistent evidence base. Regular communication sustained engagement and clarified rationale. The methodology offers a reproducible approach to imaging informatics procurement that can be scaled to less complex selections while preserving governance, stakeholder input and balanced assessment.
Source: Journal of Imaging Informatics in Medicine