The adoption of artificial intelligence in healthcare continues to expand, bringing significant potential benefits alongside new risks and regulatory expectations. Ensuring responsible development, deployment and use of AI technologies requires structured governance built on clear rules, processes and technical safeguards. In response to growing international regulatory activity, including guidance from global and federal bodies and emerging legal frameworks, healthcare delivery organisations must translate high-level principles into operational capabilities. A structured approach to governance can help organisations align AI solutions with clinical priorities, regulatory requirements and ethical standards while promoting patient safety and transparency. A People, Process, Technology and Operations framework has been developed to guide healthcare delivery organisations in establishing scalable and sustainable AI governance. The framework identifies core organisational capabilities across four domains, offering a practical roadmap for evaluating readiness and implementing oversight mechanisms across the AI lifecycle.

 

Building Governance Through People and Structure

Effective AI governance begins with a clearly defined governance committee supported by specialised subcommittees overseeing implementation and monitoring, quantitative assessment, ethics and legal oversight, and operational execution. This structure distributes responsibility across clinical, technical, social, informatics, operational and regulatory domains, ensuring multidisciplinary expertise informs decision-making. Committee members are expected to contribute domain-specific insight, from evaluating clinical risks and workflow implications to assessing model performance, data quality, security, privacy and regulatory compliance.

 

Clinical representatives provide frontline knowledge and assess operational value, while technical experts evaluate model performance, user interfaces and system architecture. Social and informatics experts advise on workflow integration, human-computer interaction and data readiness. Operational members address strategic alignment, financial implications and long-term sustainability. Ethics and legal specialists ensure compliance with internal policies, privacy laws and broader regulatory requirements.

 

The framework emphasises regular updates to committee membership to prevent concentration of control and to build institutional capacity. Ongoing education and training are also required, covering AI mechanisms, procurement processes, validation methods, monitoring practices and evolving regulatory landscapes. This continuous learning approach supports adaptability and shared understanding as technologies and policies evolve.

 

Establishing Oversight Across the AI Lifecycle

The governance committee operates through a systematic process spanning the full AI lifecycle, from problem identification and procurement to development, clinical integration and lifecycle management. All AI products used within the organisation should be registered with the governance committee to maintain visibility and oversight. Following registration, the committee determines the appropriate level of governance, distinguishing between limited and full oversight based on risk.

 


 

Products subject to limited governance receive minimal review and may bypass subsequent decision points after initial assessment. In contrast, products under full governance undergo comprehensive review at defined decision points throughout development, validation, deployment and monitoring. This risk-based stratification enables proportional oversight aligned with potential clinical and operational impact.
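As a purely hypothetical sketch of this triage step (the framework does not prescribe specific risk criteria, and the product attributes and rules below are illustrative assumptions), the registration-then-stratification logic might look like:

```python
from dataclasses import dataclass
from enum import Enum

class GovernanceTier(Enum):
    LIMITED = "limited"   # minimal review; may bypass later decision points
    FULL = "full"         # comprehensive review at each lifecycle decision point

@dataclass
class AIProduct:
    # Illustrative risk attributes only; real criteria are organisation-specific.
    name: str
    patient_facing: bool          # does output reach patients or clinicians directly?
    influences_treatment: bool    # can it alter diagnosis or treatment decisions?
    autonomous: bool              # does it act without human review?

def assign_tier(product: AIProduct) -> GovernanceTier:
    """Assign a governance tier after registration (hypothetical rule set)."""
    if product.influences_treatment or product.autonomous:
        return GovernanceTier.FULL
    if product.patient_facing:
        return GovernanceTier.FULL
    return GovernanceTier.LIMITED

# Example: a back-office scheduling model might qualify for limited oversight.
scheduler = AIProduct("no-show predictor", patient_facing=False,
                      influences_treatment=False, autonomous=False)
print(assign_tier(scheduler).value)  # limited
```

In practice the rule set would be derived from the organisation's standardised risk assessment methodology rather than hard-coded booleans, but the shape of the decision is the same: register first, then route to proportional oversight.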

 

The framework calls for a standardised risk assessment methodology to support oversight decisions and for clear delineation of responsibilities between the governance committee and departmental teams. Documentation and transparency are central to this process, ensuring traceability of decisions and accountability across stages. By formalising lifecycle oversight, organisations can balance innovation with patient safety and regulatory compliance while maintaining structured evaluation at each stage of AI adoption.

 

Technical Infrastructure and Operational Sustainability

Robust technical infrastructure underpins effective AI governance. The framework specifies capabilities required across the AI lifecycle, including systems for solution registration and tracking, document management for model artefacts, cohort specification and data extraction tools, and secure computing environments for development and retrospective validation. Secure environments may include de-identified data access, central processing units, graphics processing units for certain model types, common data models and integration of multiple data sources.

 

For clinical integration and lifecycle management, organisations should establish real-time validation environments with data quality monitoring, metadata management, notification systems, outcomes monitoring and operational monitoring. Dashboards and reporting tools are required to visualise inputs, outputs and performance metrics, while web service integrations support timely data access within electronic health records and other health IT systems.

 

Cost allocation and ownership of technical infrastructure must be clearly defined. Decisions regarding capital investment, departmental cost responsibility and baseline IT services form part of governance planning. Documentation of infrastructure ownership ensures clarity and accountability, particularly where systems are managed by specific clinical departments.

 

The addition of an Operations domain distinguishes this framework from traditional models. Operational governance requires an executive sponsor positioned below the chief executive level to coordinate and report on governance activities. Budget planning encompasses upfront capital costs, project-specific needs and ongoing governance operations, with sustaining costs representing a defined proportion of the total budget. Measures of success are defined across three areas: efficiency; adaptability; and safety and effectiveness, enabling continuous evaluation. Engagement with patient community members further strengthens transparency and trust.

 

The People, Process, Technology and Operations framework offers healthcare delivery organisations a structured approach to establishing AI governance aligned with regulatory expectations and organisational priorities. By defining capabilities across multidisciplinary leadership, lifecycle oversight, technical infrastructure and operational sustainability, the framework translates high-level principles into actionable organisational requirements. It also provides a basis for assessing readiness and identifying capability gaps across four domains. Real-world application has demonstrated its practical utility in formalising governance structures and policies. Structured governance supported by clearly defined roles, processes and infrastructure will remain central to ensuring safe, effective and equitable integration of AI into healthcare delivery.

 

Source: npj Digital Medicine

Image Credit: iStock


References:

Kim JY, Hasan A, Balu S et al. (2026) People process technology and operations framework for establishing AI governance in healthcare organizations. npj Digit Med: In Press.


