Unnecessary treatment raises risks and costs without benefit. Evidence from a controlled decision task with practising clinicians shows that decision support from artificial intelligence reduces unnecessary prescribing and improves accuracy across common payment environments. Clinicians worked through realistic consultations with full histories and multiple options while the availability of AI recommendations and the incentive structure for prescribing were varied. A complementary exercise with medical students clarified behavioural mechanisms linked to knowledge, incentives and patient welfare. Together, the results indicate that AI support shifts choices towards appropriate therapies, curbs additional medications that add no value and raises decision quality across different financial contexts. 

 

Task Setup and Payment Models 

Participants completed virtual consultations based on standardised scenarios that offered five treatment options, only one of which was optimal. Up to two medications could be prescribed, capturing the tendency to add unnecessary therapies. Decisions affected a payoff for the clinician and a linked donation to a patient-facing charity. AI assistance, when present, provided a structured diagnosis and a specific recommendation generated by ChatGPT 4.0, which achieved 73.93% accuracy on the relevant examination bank.

 


 

The task varied incentives across three settings that shape the marginal return to additional treatments. A flat scheme kept payoffs constant regardless of treatment volume, mirroring salaried environments. A progressive scheme raised payoffs with each additional treatment, reflecting volume-rewarding models. A regressive scheme reduced payoffs as unnecessary treatments were added and aligned payoffs with the patient-linked donation, representing an overuse-penalising model in which clinician and patient interests move together. This design enabled a direct comparison of how AI assistance interacts with fixed, volume-rewarding and overuse-penalising environments. 

 

The main sample comprised 196 practising physicians from a hospital in Wuhan who completed the scenarios under different combinations of assistance and incentives. To illuminate behavioural channels, 120 medical students undertook the same task. This parallel sample helped quantify the contribution of knowledge gaps and other factors to unnecessary treatment, allowing the analysis to distinguish between effects driven by ability, incentives and beliefs about patient welfare. 

 

Clear Effects on Choices, Quantity and Accuracy 

Clinicians were markedly receptive to AI recommendations. When AI recommended an option, the probability of prescribing that option rose by 25.7–28.4 percentage points, with the largest effect under the flat scheme. Receptiveness was stronger in clinical domains where baseline familiarity was lower, indicating greater influence when knowledge constraints were more likely to bind. This pattern redirected choices towards clinically appropriate options while preserving clinician discretion. 

 

AI assistance reduced treatment quantity as well. Across incentive schemes, the probability of unnecessary treatment fell by 10.9–25.7 percentage points, equal to relative reductions of 15.2–80.3 percent. Absolute and relative reductions were greatest under the flat scheme, where incentives neither reward nor penalise extra treatments. Reductions were smaller yet present under the progressive scheme, showing that AI can counteract, though not fully offset, financial pulls towards additional prescribing. Under the regressive scheme that penalised overuse and aligned payoffs with patient donations, AI contributed to parsimony within an environment already oriented to appropriateness. 

 

Accuracy improved alongside lower unnecessary treatment. Across schemes, AI assistance increased the probability of selecting the clinically optimal therapy by 9.8–13.3 percentage points, corresponding to 14.6–19.9 percent improvements. The largest absolute gain in accuracy occurred under the regressive scheme, consistent with conditions that reinforce appropriate choices by aligning clinician and patient interests. Taken together, these effects show that AI did not merely reduce the number of treatments but helped concentrate choices on the right treatment. 
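A quick arithmetic check ties the absolute and relative figures together: dividing each percentage-point change by its relative change recovers the implied baseline rate before AI assistance. The pairing of range endpoints below is an illustrative assumption, since the study reports ranges rather than matched pairs.

```python
# Recover the implied baseline probability (in percent) from a reported
# absolute change (percentage points) and relative change (as a fraction).
# Endpoint pairings are illustrative assumptions, not matched pairs from the study.
def implied_baseline(pp_change: float, relative_change: float) -> float:
    return pp_change / relative_change

# Unnecessary treatment: a 25.7 pp drop at an 80.3% relative reduction
# suggests roughly a 32% baseline rate of unnecessary prescribing.
print(round(implied_baseline(25.7, 0.803), 1))   # 32.0

# Accuracy: both ends of the reported range imply a baseline near 67%,
# i.e. about two in three choices were optimal without AI assistance.
print(round(implied_baseline(9.8, 0.146), 1))    # 67.1
print(round(implied_baseline(13.3, 0.199), 1))   # 66.8
```

The near-identical implied accuracy baselines at both ends of the range are consistent with the reported percentage-point and relative gains describing the same underlying starting point.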

 

The interaction of guidance and incentives followed a consistent logic. In the absence of volume-linked rewards or penalties, AI guidance acted as a salient prompt against adding treatments that do not improve outcomes. Where activity generated higher payoffs, AI still nudged prescribing towards restraint, but the magnitude of change was smaller. Where payoffs tracked patient welfare by design, AI amplified a pre-existing push towards appropriateness, delivering the largest absolute accuracy gains. 

 

Drivers of Unnecessary Treatment 

A complementary exercise with medical students using the same task identified the main drivers of unnecessary treatment. Indicators of limited knowledge accounted for the largest share of explained variation, followed by monetary incentives, considerations of patient welfare and defensive behaviour. AI support reduced the knowledge-related component, while payment design continued to influence the extent of change in prescribing. These findings align with the observed pattern that AI effects are strongest where knowledge is relatively constrained and where incentives do not push for higher volumes. 

 

Across both samples, the mechanism evidence and the behavioural responses point in the same direction. Guidance operates primarily by addressing knowledge limitations and uncertainty at the point of decision, whereas financial incentives shape how far clinicians move towards restraint or accuracy in response to that guidance. The results therefore separate the informational role of AI from the motivational role of payment schemes, showing that each contributes independently to prescribing choices. 

 

AI decision support reduced unnecessary treatment and improved accuracy across fixed, volume-rewarding and overuse-penalising payment environments. The largest reductions in unnecessary care appeared under fixed pay, and the largest accuracy gains where clinician and patient interests were aligned. Mechanism evidence points to knowledge limitations as the dominant contributor to unnecessary treatment, with incentives also important. The study’s implication is to pair AI support with payment designs that emphasise appropriateness and align interests so that guidance translates into consistent, high-quality prescribing while maintaining clinical agency. 

 

Source: Journal of Health Economics 



References:

Wang Z, Wei L & Xue L (2025) Overcoming medical overuse with AI assistance: An experimental investigation. Journal of Health Economics; 103:103043.


