As technology evolves, the role of Artificial Intelligence (AI) in decision-making processes continues to grow. Today, humans increasingly depend on algorithms to process information, guide behaviour and draw conclusions.


In a new study, researchers examine how humans react to AI decision-making and explore the question: "Is society ready for AI ethical decision-making?" They do this by studying human interaction with autonomous cars. The findings are published in the Journal of Behavioral and Experimental Economics.


The researchers conducted two experiments. In the first, they presented 529 human subjects with an ethical dilemma: a car driver faces an unavoidable collision and must decide whether to crash into one group of people or another. The collision would harm one group but save the lives of the other. The subjects had to rate the decision of a human driver and of an AI driver. The goal of this experiment was to measure any bias people might hold against AI ethical decision-making.


In the second experiment, the researchers asked 563 human subjects to respond to two scenarios. In the first, a hypothetical government had already decided to allow autonomous cars to make ethical decisions. In the second, the participants could vote on whether to allow autonomous cars to make ethical decisions. In both cases, the subjects could choose to be for or against the decisions made by AI.


The findings showed that when participants were asked to evaluate the decisions of the human driver and the AI driver, they had no definitive preference for either. But when asked whether AI drivers should be allowed to make ethical decisions on the road, the participants were strongly against AI-operated cars.


According to the researchers, the discrepancy between the two results stems from a combination of two elements. First, people generally believe that society as a whole does not want AI ethical decision-making. Therefore, when asked to state an opinion individually, they express what they believe to be the opinion of the majority.


The second element is location. In regions where people trust their government and political institutions are strong, participants were more willing to accept AI ethical decision-making. In regions where trust in government is low and political institutions are weak, participants were less accepting.


Overall, the researchers observe a social fear of AI ethical decision-making. This fear is not intrinsic; it is a rejection of AI based on what individuals believe society's opinion to be. When people are not explicitly asked to state their opinion, they demonstrate no bias against AI ethical decision-making, but when they are asked, they tend to show an aversion to AI. In addition, acceptance of AI is greater in developed countries and lower in developing countries.


Source: Hiroshima University

Image Credit: iStock 

