Integrating artificial intelligence (AI) into medical diagnostics, particularly radiology, promises to revolutionise clinical decision-making. However, the pace at which AI technologies are developed and commercialised significantly outstrips our understanding of their practical value for clinicians. This rapid development has created an "AI chasm": a gap between technological advances and their effective application in clinical settings. This article explores the cognitive aspects of clinician decision-making, the differences between human and AI processes, and the implications for the future of medical AI.

 

The Role of Cue Utilisation in Clinical Reasoning

Clinical reasoning, especially in radiology, relies heavily on identifying and interpreting cues from medical imaging. Radiologists use a process known as cue utilisation, which involves recognising patterns and cues that are not always apparent to novices. This expertise allows radiologists to assess medical images quickly and accurately by focusing on clinically relevant features, such as nodule brightness or specific symptoms. The ability to utilise these cues efficiently is a hallmark of expertise in the field, enabling radiologists to form a "clinical gestalt": an intuitive understanding of a medical situation based on experience and environmental context.

 

This method of reasoning aligns with the concept of ecological rationality, where the surrounding environment and the immediate context guide decision-making. It allows for rapid decision-making without the need for an exhaustive analysis of all available data, contrasting sharply with the comprehensive data processing often used in AI models.

 

Bounded Rationality and Decision-Making

Bounded rationality is a concept from cognitive science that describes the limitations of human decision-making processes. Unlike traditional rationality, which assumes that individuals strive for the optimal decision using all available information, bounded rationality acknowledges human cognitive and time constraints. Clinicians often use satisficing strategies, making decisions that are "good enough" given the available information and environmental context. This approach is highly dependent on the clinician's experience and the specific context of the clinical setting.

 

In contrast, AI systems operate without these contextual limitations. They process vast amounts of data and identify patterns or correlations that may not be immediately apparent to human clinicians. However, this capability also means that AI can sometimes use extraneous or irrelevant data points that are not clinically significant, leading to potential pitfalls in decision-making.
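
As a hypothetical illustration of this pitfall (a sketch invented for this article, not an example from the cited paper), the snippet below fits a simple classifier on synthetic data in which a non-clinical variable, a made-up scanner_id, happens to correlate with the outcome through sampling alone. The model ends up weighting this extraneous cue far more heavily than the genuine clinical signal.

```python
# Illustrative sketch only: a model latching onto an extraneous,
# non-clinical feature. "scanner_id" is a hypothetical variable,
# not drawn from the cited study.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Clinically meaningful but noisy signal (e.g. nodule brightness).
nodule_brightness = rng.normal(0.0, 1.0, n)
disease = (nodule_brightness + rng.normal(0.0, 2.0, n) > 0).astype(int)

# Extraneous cue: diseased patients happened to be imaged mostly on
# scanner 1 -- a sampling quirk with no clinical meaning.
scanner_id = np.where(rng.random(n) < 0.9, disease, 1 - disease)

X = np.column_stack([nodule_brightness, scanner_id])
model = LogisticRegression().fit(X, disease)

# The non-clinical scanner feature dominates the learned weights.
print(dict(zip(["nodule_brightness", "scanner_id"], model.coef_[0].round(2))))
```

A clinician would never weigh which machine produced an image, but a model given that column has no way of knowing it is clinically meaningless.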

 

Debounding in AI Models and Its Implications

AI models in medical diagnostics often undergo a process called "debounding", in which decisions are made using all available information, irrespective of clinical context. This process involves two main stages: labelling and modelling. During labelling, human experts reduce complex clinical decisions to simplified labels, often losing important contextual information. In the modelling phase, AI systems use these labels to learn patterns from the data, which may include irrelevant or misleading cues.
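
A minimal sketch of these two stages is given below, under invented assumptions (the case fields, the corner artefact, and the threshold are all hypothetical; this is not the authors' pipeline). The labelling step collapses a nuanced clinical case into a single binary label, and the modelling step shows how even a trivial classifier can score perfectly on such labels by keying on a non-clinical artefact rather than the pathology.

```python
# Illustrative sketch of "debounding": labelling, then modelling.
import numpy as np

# --- Labelling: a rich clinical case is collapsed to one binary label,
#     discarding the context a clinician would weigh.
case = {
    "impression": "possible early consolidation, correlate clinically",
    "prior_imaging": "stable appearance over 12 months",
    "symptoms": ["fever", "productive cough"],
    "reader_confidence": "low",
}
label = 1  # "pneumonia"; everything else in `case` is invisible to the model

# --- Modelling: the model sees only pixels and labels. Here every positive
#     training image carries a bright corner marker (a non-clinical artefact)
#     alongside a subtle central opacity (the true signal).
rng = np.random.default_rng(1)

def make_image(diseased: bool) -> np.ndarray:
    img = rng.normal(0.0, 1.0, (32, 32))
    if diseased:
        img[14:18, 14:18] += 1.0   # subtle opacity: the clinically relevant cue
        img[0:2, 0:2] += 5.0       # corner marker: clinically meaningless
    return img

labels = np.array([0, 1] * 50)
images = np.stack([make_image(bool(y)) for y in labels])

# A "model" that looks only at the corner artefact still classifies this toy
# dataset perfectly, despite ignoring the pathology entirely.
corner_brightness = images[:, 0:2, 0:2].mean(axis=(1, 2))
preds = (corner_brightness > 2.5).astype(int)
print("accuracy using only the artefact:", (preds == labels).mean())
```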

 

This debounding can lead AI systems to base decisions on factors that are not clinically relevant, such as non-clinical image artefacts. As a result, there is a fundamental mismatch between how clinicians and AI systems process and prioritise information, complicating the integration of AI into clinical practice. This mismatch poses significant challenges, including the risk of clinicians over- or under-relying on AI outputs, potentially leading to errors in diagnosis and treatment.

 

Integrating AI into medical decision-making, particularly in high-risk areas like radiology, requires a nuanced understanding of both human and AI cognitive processes. Clinicians operate within an ecologically bounded framework, making decisions based on experience, context, and available cues. In contrast, AI systems, bound by the data they are trained on, may lack the ability to discern the clinical relevance of certain patterns or features.

 

To bridge the AI chasm, future research must focus on better understanding the cognitive aspects of clinical decision-making in the context of AI use. This includes developing models that account for the bounded rationality of clinicians and the dataset-bound nature of AI. By addressing these cognitive considerations, we can enhance the safety, usability, and effectiveness of AI systems in clinical settings, ensuring that they complement rather than complicate the decision-making process.

 

By understanding and addressing these cognitive aspects, the medical community can better navigate the integration of AI technologies, ensuring they are used effectively and safely in patient care.

 

Source: Lancet Digital Health

Image Credit: iStock

 


References:

Tikhomirov L, Semmler C, McCradden M (2024) Medical artificial intelligence for clinicians: the lost cognitive perspective. Lancet Digital Health, 6(8): e589–e594.


