In the rapidly evolving field of artificial intelligence (AI) research, particularly in healthcare, transparency, reliability and reproducibility are critical. Although procedural and reporting guidelines aim to structure and communicate scientific findings, data leakage and reproducibility problems persist. The FAIR principles (Findable, Accessible, Interoperable and Reusable) offer a promising route to greater transparency and trustworthiness. A recent review published in JAMIA Open explores the CALIFRAME framework, which integrates the FAIR principles with existing reporting guidelines to enhance reproducibility in AI-based medical research.

 

Integrating Reporting Guidelines and FAIR Principles

The CALIFRAME framework is built on the idea of "calibrating" existing reporting guidelines against the FAIR principles without fundamentally altering the guidelines themselves. It systematically aligns the essential elements of established reporting guidelines with the FAIR principles so that scientific data and outputs are findable and accessible to both humans and machines. This is especially relevant in AI-driven medical studies, where precise and transparent reporting can significantly affect an AI model's perceived effectiveness and reproducibility.

 

The development of the CALIFRAME framework followed a “Best Fit” approach to framework synthesis. This approach enabled the identification and merging of relevant components from existing guidelines to form a cohesive, FAIR-aligned framework. For instance, the Consolidated Standards of Reporting Trials-AI extension (CONSORT-AI) and the Research Data Alliance (RDA) FAIR Data Maturity Model were combined to form a FAIR-calibrated reporting guideline.

 

Defining the Calibration Process

The calibration process of CALIFRAME consists of several stages. The initial stage involves identifying an appropriate reporting guideline and a compatible FAIR assessment tool. Once the guideline and FAIR metrics are selected, the next stage requires a detailed mapping of the reporting guideline's elements to the corresponding FAIR principles. For instance, components related to data accessibility within the reporting guideline are mapped to the accessibility metrics of FAIR.
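To make the mapping stage concrete, the pairing of guideline items with FAIR indicators can be sketched as a simple lookup table. The item names and FAIR indicator codes below are illustrative placeholders, not the actual CONSORT-AI or RDA mappings:

```python
# Hypothetical mapping of reporting-guideline items to FAIR indicator codes.
# All item labels and codes are illustrative, not drawn from the real models.
guideline_to_fair = {
    "data_availability_statement": ["F1", "A1"],   # findable identifier, access protocol
    "model_code_repository": ["A1", "R1.1"],       # accessible, licensed for reuse
    "dataset_metadata_standard": ["I1", "I2"],     # interoperable vocabularies
}

def fair_indicators_for(item: str) -> list[str]:
    """Return the FAIR indicators mapped to a guideline item (empty if unmapped)."""
    return guideline_to_fair.get(item, [])
```

In practice such a table would be populated and reviewed by domain experts rather than hard-coded, but it captures the core of the mapping step: each reporting element is resolved to zero or more FAIR indicators.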

 

This mapping is essential for identifying commonalities and gaps. Where a guideline does not sufficiently align with the FAIR principles, supplementary items or indicators can be introduced. During this process, a diverse group of experts can contribute insights and validate the alignment; this collaboration not only helps refine the framework but also ensures a transparent and comprehensive calibration.
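The gap-identification step described above amounts to flagging guideline items that have no FAIR counterpart, so that supplementary indicators can be proposed. A minimal sketch, with hypothetical item names:

```python
# Illustrative gap check: guideline items whose FAIR mapping is empty are
# flagged as gaps needing supplementary indicators. Names are hypothetical.
mapping = {
    "trial_registration": ["F1"],        # findable via registry identifier
    "ai_model_version": ["R1.2"],        # provenance information for reuse
    "human_ai_interaction_spec": [],     # no FAIR counterpart yet: a gap
}

gaps = [item for item, fair_ids in mapping.items() if not fair_ids]
print(gaps)  # items to bring to the expert panel for supplementary indicators
```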

 

Application and Benefits of the Framework

The practical application of CALIFRAME was demonstrated through a use case involving clinical trials with AI components. The CONSORT-AI guideline was mapped to the RDA FAIR Data Maturity Model to generate a calibrated guideline. This process involved evaluating commonalities and bridging gaps where the guideline did not fully incorporate the FAIR principles.

 

The benefits of this approach extend beyond aligning AI-based studies with the FAIR principles. By calibrating reporting guidelines, researchers can promote a more transparent scientific process, foster collaboration and contribute to a culture of shared knowledge. The CALIFRAME framework also offers practical improvements in data management and reporting practices, ensuring that AI research outputs remain trustworthy and reproducible.

 

The CALIFRAME framework presents a novel approach to addressing the ongoing challenges of transparency and reproducibility in AI research within the medical field. By integrating the FAIR principles into existing reporting guidelines, it enhances the clarity and reliability of scientific outputs. The calibration process not only aligns AI research with open science standards but also provides a structured approach to improving data management practices. Through its practical application, CALIFRAME has the potential to foster a more collaborative and transparent scientific environment, paving the way for better AI research practices in healthcare and beyond.

 

Source: JAMIA Open


References:

Shiferaw KB, Balaur I, Welter D et al. (2024). CALIFRAME: a proposed method of calibrating reporting guidelines with FAIR principles to foster the reproducibility of AI research in medicine. JAMIA Open, 7(4), ooae105.


