ICU Management & Practice, Volume 25 - Issue 5, 2025
Computer vision (CV) technology offers transformative potential for continuous patient monitoring in healthcare settings. By leveraging artificial intelligence to interpret visual data, CV systems can detect subtle physiological and behavioural changes that may be missed between routine observations. This article explores the evolution of CV from other industries to healthcare applications, examining its role in supporting nursing workflows and physician decision-making.
From Missed Observations to Continuous Monitoring
It was just after midnight when a nurse entered the ICU room and found the patient unresponsive. Earlier in the evening, subtle signs - shallow breathing, disorientation - had gone unnoticed. Despite hourly checks and continuous vital sign monitoring, the deterioration had occurred in the gaps between observations. This tragic but familiar scenario illustrates a fundamental limitation in modern care: human observation, though skilled and compassionate, is not continuous. What if those subtle physiological or behavioural cues could have been detected automatically, 24/7, by a system that never sleeps? That is precisely what computer vision (CV) promises - a revolution in how we observe, interpret, and act in healthcare.
From Retail to Resuscitation: Learning from Other Industries
Computer vision - the use of artificial intelligence (AI) to interpret visual data - has quietly transformed other industries. Retail stores use CV to map customer behaviour. Logistics companies use it for inventory tracking. The automotive industry relies on it for collision avoidance and has high hopes for autonomous navigation. At its core, CV relies on deep convolutional neural networks (CNNs) that process video frames much like the human visual cortex. Described at a higher level, CNNs learn to extract hierarchical features - edges, shapes, movement - and associate them with meaningful events. In healthcare, these same technical architectures can detect movement patterns, posture changes, facial expressions, and even skin colour shifts associated with physiological stress.
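The lowest layer of this feature hierarchy can be illustrated with a single convolution. The sketch below is purely didactic - not a clinical model - and slides a hand-written Sobel-style kernel over a toy image to show how a convolutional filter responds to an edge, the kind of primitive feature a CNN's first layers learn automatically:

```python
import numpy as np

def conv2d(image, kernel):
    """Naive 2D convolution: slide the kernel over the image and
    record the filter response at each position."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical step edge: dark on the left, bright on the right
frame = np.zeros((6, 6))
frame[:, 3:] = 1.0

# Sobel-style kernel that responds strongly to vertical edges
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

edges = conv2d(frame, sobel_x)  # peaks where the brightness step occurs
```

A trained CNN stacks many such learned filters, so that later layers respond to shapes and movement rather than raw edges.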
Computer Vision at the Bedside: A Nursing Transformation
Whilst much of AI's progress has occurred in diagnostic imaging, the most transformative applications of CV are emerging in nursing workflows and continuous patient observation (Lindroth et al. 2024). Pilot studies using ceiling- or wall-mounted cameras have achieved remarkable results, including the detection of pain and discomfort, signs of delirium, and mood disturbances such as depression. Using pose estimation algorithms such as OpenPose and MediaPipe, systems can track patient movement and classify activity (e.g., bed exit, repositioning, procedure) with frame-by-frame precision. Temporal models, in particular Long Short-Term Memory (LSTM) networks, can recognise motion sequences and trigger alerts when movement patterns change over time. Nurses and other healthcare team members can use this just-in-time information to provide the right care, to the right patient, at the right time.
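To make the pipeline concrete, here is a minimal sketch of the temporal step: given a per-frame series of hip-keypoint heights (as a pose estimator such as MediaPipe might produce), a simple consecutive-frame filter stands in for a learned LSTM sequence model. The function name, threshold, and frame count are illustrative assumptions, not values from any deployed system:

```python
def detect_bed_exit(hip_y_series, bed_edge_y=0.6, min_frames=5):
    """Flag a bed-exit candidate when the hip keypoint stays beyond the
    bed edge for min_frames consecutive frames. A trained LSTM would
    replace this hand-set rule, but the temporal logic is analogous:
    decisions are made over sequences, not single frames."""
    streak = 0
    for t, y in enumerate(hip_y_series):
        streak = streak + 1 if y > bed_edge_y else 0
        if streak >= min_frames:
            return t  # frame index where the alert would fire
    return None  # no sustained movement beyond the bed edge

# Ten frames in bed, then six frames beyond the bed edge
alert_frame = detect_bed_exit([0.4] * 10 + [0.7] * 6)
```

Requiring several consecutive frames is what suppresses single-frame noise - the same reason temporal models outperform frame-by-frame classifiers for activity recognition.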
Physician Support: From Radiology to Ambient Intelligence
Computer vision has already matched or surpassed human performance in radiology and pathology, detecting lung nodules, fractures and diabetic retinopathy. In critical care, CV is expanding towards the ambient intelligence concept (Figure 1): combining continuous video, sensor, and EHR data to derive insights (Davoudi et al. 2019).

As outlined in Table 1, many diagnostic steps could benefit from automated analysis of visual/video data alone. For validation, traditional diagnostic performance statistics - area under the curve (AUC), sensitivity/specificity, precision/recall - are metrics well recognised by clinicians. Computer vision algorithms can be used as a screening tool to "rule out" patients who are not in danger of needing immediate intervention, so efforts can be concentrated on those who will benefit the most at that point in time.

Barriers to Adoption
Despite technological advances, human factors remain the greatest barrier to CV adoption. In our recent survey at Mayo Clinic, 81% of clinicians supported computer vision focused on patient safety, but concerns centred on privacy, legal implications, and trust in AI decision-making (Glancova et al. 2021).
To ensure smooth adoption as CV expands in clinical care, the following implementation challenges should be addressed using an established framework, such as the Agile Science Roadmap we recently published (Lindroth et al. 2025):
- Data governance: Continuous video streams require secure on-premises processing or encrypted edge computing to comply with HIPAA and GDPR. To protect privacy, edge computing is ideal because images are processed in real time, minimising the storage of image data.
- Explainability: Clinical users must understand what triggered an alert; black-box models undermine early adoption, although they may become more acceptable in later stages once trust is established.
- Interoperability: Integration with EHRs, alarm systems, and workflow tools remains uneven.
- Culture: Nurses and physicians often fear surveillance; patients fear loss of dignity. Transparency and consent protocols are essential, although this barrier may diminish as CV becomes widespread in other industries.
To solve these implementation challenges, a broad stakeholder team that includes patients and clinicians is required.
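The data-governance and privacy points above can be made concrete with an edge-processing sketch: only derived features (e.g., pose keypoints) leave the device, while raw pixels are discarded immediately. The function and the pose_estimator callable are hypothetical stand-ins, not a specific product's API:

```python
def process_at_edge(frame, pose_estimator):
    """Edge-processing sketch: run inference on the raw frame, keep only
    derived, non-identifiable features, and discard the pixels so no
    image data is stored or transmitted off the device."""
    keypoints = pose_estimator(frame)          # e.g., pose landmarks
    event = {"keypoints": keypoints,
             "frame_stored": False}            # raw pixels never persist
    del frame                                  # drop the image reference
    return event

# Usage with a stub estimator returning one normalised hip keypoint
event = process_at_edge([[0] * 4] * 4, lambda f: [(0.5, 0.5)])
```

This is the pattern behind the "encrypted edge computing" requirement: downstream systems see events and keypoints, never faces or rooms.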
The Path Forward is Augmented Empathy, Not Automation
The next generation of CV in healthcare will be defined by trust, explainability, and co-design, with the following key steps:
- Clinician-in-the-loop learning/development: An essential step in CV system design, in which nurses and physicians annotate and validate system outputs to improve models.
- Federated learning: Developers should share model updates without sharing video, protecting privacy.
- Edge computing deployment: Processing data locally on secure hardware to avoid storing video data in the cloud or on local servers.
- Prospective trials: Measuring not just algorithmic accuracy but patient outcomes like fewer falls, faster interventions, and reduced clinician workload.
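The federated learning step above can be sketched with the core of the FedAvg algorithm: each hospital trains locally and shares only model weights, which a coordinator averages, weighted by each site's sample count. The sites, sizes, and two-parameter "model" below are illustrative assumptions:

```python
def federated_average(site_weights, site_sizes):
    """FedAvg aggregation: combine model parameters from several sites,
    weighting each site's contribution by its number of training
    samples. Only parameters travel; video data never leaves a site."""
    total = sum(site_sizes)
    n_params = len(site_weights[0])
    return [
        sum(w[i] * n for w, n in zip(site_weights, site_sizes)) / total
        for i in range(n_params)
    ]

# Three hypothetical hospitals, each contributing a two-parameter model
sites = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
sizes = [100, 100, 200]
global_model = federated_average(sites, sizes)
```

In a real deployment each round would repeat local training and aggregation, often with secure aggregation so the coordinator never sees any single site's update in the clear.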
Ultimately, CV should not replace human vigilance but extend it to provide constant, objective observation that supports the intuition and empathy of clinical care.
Conclusion
Computer vision offers a future where every patient is continuously supported by privacy-preserved technology, where early warning signs and problems are not missed, and where nurses, physicians, and other members of the healthcare team are supported by intelligent systems that extend their monitoring capabilities, detecting what otherwise might be missed.
Conflict of Interest
Dr Lindroth is supported by National Institutes of Health, National Institute on Aging awards (1K23AG076662-04, AG097037-01).
References:
Davoudi A, et al. Autonomous patient monitoring in the ICU using computer vision. Crit Care Med. 2019;47(11):e962-70.
Glancova A, et al. Are we ready for video recognition and computer vision in the ICU? Appl Clin Inform. 2021;12:120-32.
Lindroth H, et al. Review of computer vision technology application in hospital settings. J Imaging. 2024;10(2):45-60.
Lindroth H, et al. Applying an agile science roadmap to integrate and evaluate ethical frameworks throughout the lifecycle and use of artificial intelligence tools in the intensive care unit. Crit Care Nurs Clin North Am. 2025;37(2):347-63.
