A recent study examined public concerns about AI replacing human workers. The research also uncovered cultural differences in how people perceive AI’s involvement in six key professions: doctors, judges, managers, caregivers, religious leaders, and journalists.


The study surveyed over 10,000 participants from 20 countries—including the United States, India, Saudi Arabia, Japan, and China—who assessed these professions based on eight psychological traits: warmth, sincerity, tolerance, fairness, competence, determination, intelligence, and imagination. Respondents also evaluated AI’s ability to replicate these traits and shared their fears about AI taking over these roles.


Findings suggest that when AI enters a profession, people instinctively compare the human qualities required for that role with AI’s perceived capabilities. The level of fear is closely tied to the mismatch between these traits and AI’s ability to replicate them.


The study highlights significant differences in fear levels across countries. India, Saudi Arabia, and the United States reported the highest concerns, particularly regarding AI replacing judges and doctors. In contrast, participants from Turkey, Japan, and China expressed the lowest levels of fear, suggesting that cultural factors—including historical experiences with technology, media narratives, and AI policies—play a crucial role in shaping public attitudes. Germany’s responses fell in the middle, reflecting cautious optimism about AI’s integration into society.


The study also found job-specific differences in fear levels. AI judges were the most feared across nearly all countries, reflecting concerns about fairness, transparency, and moral judgment. AI-driven journalists, by contrast, were the least feared, likely because individuals can choose how they engage with news, whereas people subject to judicial decisions have little choice in the matter.


Roles such as AI doctors and caregivers also generated strong concerns in some countries due to AI’s perceived lack of empathy and emotional understanding. This aligns with earlier research on AI managers, which found that people react more negatively to AI managers than to AI co-workers or AI-assisted tools. The resistance was particularly strong in leadership areas requiring human qualities like empathetic listening and respectful communication. 


The study underscores a crucial link between public fears and the perceived mismatch between occupational expectations and AI capabilities, offering a framework for culturally sensitive AI development. The study's authors note that adverse effects can follow whenever AI enters a new profession; the key is to minimise these effects, maximise the benefits, and ensure an ethically acceptable balance.


By understanding what people value in human-centric roles, developers and policymakers can create AI technologies that foster trust and acceptance. The authors highlight that a one-size-fits-all approach overlooks critical cultural and psychological factors, which could hinder AI adoption across different societies. 


The study suggests practical approaches to easing AI concerns. For instance, fears about AI doctors lacking sincerity could be addressed through increased transparency in decision-making and positioning AI as a support tool rather than a replacement for human practitioners. Similarly, concerns about AI judges might be mitigated by developing fairness-enhancing algorithms and launching public education campaigns to demystify AI decision-making processes.


Taken together, the findings indicate that fear arises when there is a gap between AI's perceived capabilities and the skills a role requires. Countries such as India, Saudi Arabia, and the U.S. report higher fear levels, especially regarding AI judges and doctors, while Japan, China, and Turkey show lower concern. Designing AI systems that align with public expectations is therefore crucial to fostering trust and adoption, and strategies such as transparency, fairness-focused AI development, and public education can help mitigate fears and ease AI's integration into society.


Source: Max Planck Institute for Human Development




References:

Dong M, Conway JR, Bonnefon J-F, et al. (2024) Fears about artificial intelligence across 20 countries and six domains of application. American Psychologist.

Dong M, Bonnefon J-F, Rahwan I (2024) Toward human-centered AI management: Methodological challenges and future directions. Technovation, 131, 102953.


