1. Bernstein and colleagues evaluated potential jurors’ perceptions of a radiologist’s culpability in a hypothetical false-negative diagnosis malpractice case.
2. Perceived liability increased when the AI flagged a finding that the radiologist had missed. Perceived liability was lower when data regarding the AI’s precision were presented.
Evidence Rating Level: 1 (Excellent)
Study Rundown: AI advancements can lead to earlier disease detection and improved diagnostic accuracy. However, radiologists may be more vulnerable to malpractice allegations when they disagree with AI in a false-negative case. Bernstein and colleagues recruited participants eligible for jury duty in the US to evaluate the culpability of a radiologist in two hypothetical malpractice scenarios. Each participant was randomly assigned to one of five conditions representing various levels of AI use and disclosure in the case. The study found that participants were equally likely to side with the radiologist when no AI was used as when the AI agreed with the radiologist’s false-negative interpretation. Participants were more likely to side with the plaintiff when the AI disagreed with the radiologist’s false-negative diagnosis. Additionally, participants were less likely to side with the plaintiff when the scenario included the AI’s false omission rate (FOR) or false discovery rate (FDR). This study empirically demonstrated how potential jurors react to a malpractice case involving AI and suggested that presenting data on AI precision may mitigate the penalization of radiologists.
Click here to read the study in NEJM AI
Relevant Reading: AI in radiology: Legal responsibilities and the car paradox
In-Depth [randomized controlled trial]: 1,334 participants were recruited who met the following criteria: adults 18-89 years of age, living in the US, with English as a primary language. Participants were randomized to two clinical scenarios in which a radiologist failed to identify an abnormality and was sued by the patient or their family (the plaintiffs). Participants indicated whether the radiologist had met their duty of care by answering a single yes/no question: “Did the radiologist meet their duty of care to the patient?” A response of “no” meant siding with the plaintiff and finding the radiologist liable; “yes” meant siding with the defendant (the radiologist). Within the two scenarios (one involving a brain bleed, the other lung cancer), participants were further randomized to one of five experimental conditions: no AI (control), AI found pathology (AI disagree), AI found no pathology (AI agree), AI disagree with an FDR of 50% (AI disagree + FDR), and AI agree with an FOR of 1% (AI agree + FOR). In both scenarios, there was no significant difference in the proportion of “no” responses between the no AI and AI agree conditions (brain bleed: 50.0% vs 56.3%, p = 0.33; lung cancer: 63.5% vs 65.2%, p = 0.77). Participants sided with the plaintiffs more often in the AI disagree condition than in the AI agree condition (brain bleed: 72.9% vs 50.0%, p = 0.001; lung cancer: 78.7% vs 65.2%, p = 0.04). Meanwhile, when participants were given the AI’s FOR, only 34.0% and 56.4% of participants sided with the plaintiffs in the brain bleed and lung cancer scenarios, respectively. The study was limited in its ecological validity, as participants received fewer details than real jurors would in an actual case. Overall, this study provided empirical evidence of how potential jurors react to a malpractice case involving AI and how the presentation of AI performance data affects legal outcomes.
Image: PD
©2025 2 Minute Medicine, Inc. All rights reserved. No works may be reproduced without expressed written consent from 2 Minute Medicine, Inc. Inquire about licensing here. No article should be construed as medical advice and is not intended as such by the authors or by 2 Minute Medicine, Inc.