Ethics is the systematic study of the principles of moral “correctness” that guide decision-making in the face of moral questions. While physicians undergo rigorous training across many dimensions, there is growing demand for further teaching in ethics to develop competence in assessing ethical problems. Making ethically complex decisions is shaped by a physician’s moral development, which in turn reflects individual experiences such as upbringing, religion, and socioeconomic background. Recent advances in artificial intelligence (AI) have catalysed transformative change across industries, including medicine. This article explores the potential of AI, specifically ChatGPT, to address ethical dilemmas in healthcare, and assesses its moral competence using Dr Georg Lind’s Moral Competence Test (MCT).

 

Understanding Moral Development and Competence

Moral development, as theorised by Dr Lawrence Kohlberg, consists of six stages grouped into three levels: pre-conventional (Obedience and Punishment, Self-Interest), conventional (Interpersonal Relationships, Law and Order), and post-conventional (Social Contract, Universal Principles). Each stage reflects a progressively deeper understanding of, and approach to, moral questions. Moral competence, defined as the ability to deliberate consistently on ethical problems and to interpret decisions in terms of Kohlberg’s stages, is critical in healthcare. The MCT, developed by Dr Georg Lind, objectively measures an individual’s moral competence using dilemma scenarios that separate moral reasoning from personal opinions and beliefs. Because it yields a quantitative measure of moral assessment, it is well suited to evaluating an AI such as ChatGPT.
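The stage-and-level structure described above can be sketched as a small lookup, for instance when labelling MCT statements by the stage they represent. This is purely illustrative; the stage names follow the text, and the `level_of` helper is a hypothetical convenience, not part of the study's method.

```python
# Kohlberg's six stages grouped into three levels, as described in the text.
KOHLBERG_STAGES = {
    "pre-conventional": {1: "Obedience and Punishment", 2: "Self-Interest"},
    "conventional": {3: "Interpersonal Relationships", 4: "Law and Order"},
    "post-conventional": {5: "Social Contract", 6: "Universal Principles"},
}

def level_of(stage: int) -> str:
    """Return the Kohlberg level that a numbered stage belongs to."""
    for level, stages in KOHLBERG_STAGES.items():
        if stage in stages:
            return level
    raise ValueError(f"unknown stage: {stage}")

print(level_of(6))  # -> post-conventional
```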

 

AI in Medicine and Ethical Challenges

The integration of AI in medicine has the potential to enhance healthcare delivery across specialities. AI programmes can learn from large datasets, recognise complex patterns, and provide evidence-based recommendations, reducing the limitations of human subjectivity, bias, and time constraints. While AI has advanced in imaging, diagnosis, and treatment, its application in healthcare-related ethical problems is still limited. ChatGPT, a large language model-based chatbot developed by OpenAI, has shown potential in various applications. However, its ability to evaluate ethically complex medical decisions has not been thoroughly studied. This study aims to assess the reliability of ChatGPT’s moral competence when faced with ethical scenarios using the MCT.

 

Assessing ChatGPT’s Moral Competence

To assess ChatGPT’s moral competence, the study used the MCT, which comprises two ethical scenarios: a healthcare-based “doctor’s dilemma” and a non-healthcare-based “workers’ dilemma.” Each scenario presents statements corresponding to Kohlberg’s six stages of moral development, which ChatGPT rated on a 9-point Likert scale. The C-index, a score ranging from 0 to 100, was then calculated to evaluate how consistently ChatGPT applied moral principles across the scenarios. Comparing ChatGPT 3.5 and 4.0, the study found that ChatGPT 4.0 demonstrated higher moral competence. Despite limitations such as variability in responses and restricted access to ChatGPT 4.0, the findings suggest that AI can be a useful tool in addressing ethical dilemmas in healthcare.
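As a rough illustration of how a C-index can be derived, the sketch below computes the share of rating variance explained by the moral-stage factor for a single respondent. This is a simplified reading of Lind's computation: the 6×4 response layout (6 stages × 2 dilemmas × pro/con) and the example ratings are illustrative assumptions, not the study's data or Lind's exact algorithm.

```python
import numpy as np

def c_index(ratings):
    """Simplified C-index: the percentage of total rating variance
    explained by the moral-stage factor (rows).

    ratings: a 6 x k array -- one row per Kohlberg stage, one column
    per argument context (e.g. 2 dilemmas x pro/con = 4 columns),
    each cell a Likert rating.
    """
    ratings = np.asarray(ratings, dtype=float)
    grand_mean = ratings.mean()
    # Total sum of squared deviations from the grand mean
    ss_total = ((ratings - grand_mean) ** 2).sum()
    if ss_total == 0:
        return 0.0  # no variation at all, so no measurable consistency
    # Sum of squares attributable to the stage factor (row means)
    stage_means = ratings.mean(axis=1)
    ss_stage = ratings.shape[1] * ((stage_means - grand_mean) ** 2).sum()
    return 100.0 * ss_stage / ss_total

# Hypothetical ratings: a respondent who rates arguments entirely by
# stage, regardless of dilemma or pro/con side, scores the maximum.
consistent = [[stage] * 4 for stage in (-4, -2, 0, 1, 3, 4)]
print(round(c_index(consistent), 1))  # -> 100.0
```

A respondent whose ratings depend only on the stage of the argument, not on which dilemma it comes from or which side it supports, scores near 100; ratings unrelated to stage score near 0.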

 

Comparative Analysis of ChatGPT 3.5 and 4.0

The study found that both versions of ChatGPT trended towards higher moral preference for the later stages of Kohlberg’s theory in both dilemmas. ChatGPT 4.0 generally exhibited higher moral competence than ChatGPT 3.5. The highest stage preference for both models was Universal Principles (Stage 6), indicating an advanced level of moral reasoning. Performance nonetheless varied, with some iterations producing low C-index scores, highlighting the need for further development. The consistent evaluation of Law and Order (Stage 4) across both models suggests that AI can reliably assess arguments grounded in legal principles, though more complex ethical reasoning still needs improvement.

 

Conclusion

Although AI in medical ethics is in its nascent stages, it has the potential to become a valuable tool in the clinical environment. This study indicates that ChatGPT demonstrates medium moral competence as assessed by the MCT, suggesting that it can evaluate arguments based on Kohlberg’s theory of moral development. Future improvements in AI models like ChatGPT could enhance their ability to assist physicians in making ethical decisions. Continued research and development are essential to realise the full potential of AI in addressing ethical challenges in healthcare, ultimately supporting physicians in providing better patient care.

 

Source: JAMIA Open

Image Credit: iStock

 


References:

Rashid AA, Skelly RA, Valdes CA et al. (2024) Evaluating ChatGPT’s moral competence in health care-related ethical problems. JAMIA Open. 7(3):ooae065.



