The current landscape of artificial intelligence (AI) in clinical diagnostics is characterised by rapid advancement and the ongoing refinement of AI tools, particularly in imaging and pathology. As these proprietary systems mature, the focus is shifting from demonstrating diagnostic accuracy alone to differentiating between comparable AI tools in order to enhance diagnostic capability and improve clinical decision-making. This shift necessitates bespoke quality assessment tools that evaluate factors such as the risk of bias, the conduct of index tests, and the applicability of these tools in real-world clinical settings.


The Growing Regulatory Approval of AI Devices

Regulatory bodies, especially the US Food and Drug Administration (FDA), have been approving AI-enabled medical devices at a growing pace, with hundreds of devices currently authorised. These devices primarily aid in the detection of lesions or abnormalities, predominantly in radiology. For example, FDA-approved lesion detection devices for screening mammography, such as ProFound AI Software (iCAD, USA), Transpara (ScreenPoint Medical, Netherlands), and INSIGHT MMG (Lunit, South Korea), employ distinct AI technologies. Ongoing independent trials are appraising these AI tools in screening mammograms, and preliminary results are promising. These findings underscore that the future challenge lies not in integrating AI into clinical workflows but in determining which AI device is most clinically useful for a specific application or population.


Addressing Biases in AI Diagnostic Accuracy Studies

Despite these promising developments, current systematic reviews of AI diagnostic accuracy studies often lack consistent quality assessment standards. Most reviews utilise the Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) tool, which does not fully capture biases unique to AI technology. Potential biases in AI diagnostic accuracy studies include reliance on large open-source data repositories, inadequate external validation, inconsistent reference standards, and unclear reporting of the timing between index tests and reference standards. The challenge for regulatory bodies is to assess the safety and efficacy of AI devices within the context of these biases. The FDA has streamlined the approval of AI devices through the 510(k) pathway, which relies on the substantial equivalence of new devices to previously approved ones. However, this approach may perpetuate existing biases and hinder the assessment of clinical utility.


To address these challenges and strengthen evidence synthesis for AI diagnostic studies, an extension to the QUADAS-2 tool, named QUADAS-AI, is under development. The new tool will focus on biases specific to AI diagnostic accuracy studies and is being developed through an international consensus process. Robust and transparent evidence synthesis through tools like QUADAS-AI will be crucial for ensuring the quality, safety, and value of AI diagnostic tools in clinical practice. As AI technologies continue to advance, their integration into healthcare will depend on rigorous evidence appraisal and careful consideration of their real-world applicability and potential biases.


Source: The Lancet

Image Credit: iStock
