Uncertainty vs. Certainty: Clinical Heuristics in the Era of AI

By: Nathan Baggett, MD

At its core, clinical education is a process of helping learners develop clinical reasoning to provide excellent patient care. Historically, clinical reasoning has been grounded in accurate history taking, thorough examinations, and careful diagnostic testing. It’s a skill that helps us navigate clinical problems, but that doesn’t mean we’re always able to settle on a clear answer in the moment. Uncertainty may persist as we pursue additional testing or seek further information, and skilled clinicians effectively communicate this uncertainty to patients. In short, sometimes we need to be able to say, “I don’t know.” 

For most clinicians, these clinical reasoning skills were developed in an era before the widespread use of artificial intelligence (AI). But today, learners are entering medical school and residencies as “AI natives” who more seamlessly integrate AI into their personal and professional lives. As the use of AI continues to skyrocket, the way we approach clinical problems will change. While AI algorithms may streamline some clinical reasoning problems, we must be aware of how we integrate the output of an AI model into our clinical practice.

Today, we can enter a patient’s symptoms into Claude, ChatGPT, or Gemini and receive a differential diagnosis in seconds (note: our patients are already starting to do this before they come to us). However, what we often overlook is that these AI models can respond to us with a certainty that should raise our suspicions (see: AI hallucinates because it’s trained to fake answers it doesn’t know). While an AI may confidently respond to you (or your patient) with a specific diagnosis or treatment plan, it’s easy to misinterpret that immediate confidence as accuracy.

As an example, PMCardio’s “Queen of Hearts” is an AI algorithm for detecting Occlusion Myocardial Infarction (OMI). This tool enables clinicians to take a photo of a 12-lead ECG and receive a report indicating whether the tracing meets the criteria for an OMI. While this tool has tremendous potential to improve the rapid diagnosis and revascularization of patients experiencing acute myocardial ischemia, what can be lost at the bedside is that it is ultimately an algorithm that outputs a number from 0 to 1, where 0 indicates no OMI and 1 indicates high confidence for an OMI. A threshold of 0.5 determines whether the algorithm reads as positive for an OMI.

Most clinicians using this algorithm take the “positive” or “negative” output at face value, but the creators note that an ECG could be reported as “negative” for an OMI with a score they would consider “extremely close to being positive.” However, the end user never knows how close their patient’s result is to that cusp of positivity that might prompt activation of the cardiac catheterization lab. All they see is positive/negative, black/white. The model’s own uncertainty is hidden.
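
To make this concrete, here is a minimal sketch in Python of how a hard cutoff collapses a continuous score into a binary label. This is a hypothetical illustration, not PMCardio’s actual code: the 0.5 threshold comes from the description above, but the function name, labels, and tie-handling are invented for the example.

    THRESHOLD = 0.5  # cutoff described above; behavior at exactly 0.5 is an assumption

    def report(score: float) -> str:
        # Collapse the model's 0-to-1 confidence into the only thing the end user sees.
        return "OMI POSITIVE" if score >= THRESHOLD else "OMI NEGATIVE"

    # Two hypothetical ECGs with nearly identical model confidence:
    for score in (0.49, 0.51):
        print(f"model score {score:.2f} -> {report(score)}")

    # Prints:
    #   model score 0.49 -> OMI NEGATIVE
    #   model score 0.51 -> OMI POSITIVE

The 0.49 case reads as a reassuring “negative,” even though the model was nearly as suspicious as it was for the “positive” case; that near-miss never reaches the clinician.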

So, what does this mean for how we integrate AI tools into our clinical reasoning and discussions with patients? In the past, we generally had a better understanding of the evidence underlying our clinical decisions, which helped us recognize the limits of our knowledge. Now, AI tools may generate output without disclosing the supporting evidence or the tool’s confidence in the answer. In the era of AI, what it means to disclose our uncertainty may change; instead, we may need to consider how we discuss with patients the strengths and limitations of the tools we use.

Questions to Consider

  1. How do you discuss uncertainty with your patients? Your trainees?
  2. What AI tools do you use? Do you trust these tools clinically?
  3. How do you know the validity of output from AI tools?
  4. What do patients think about using AI in clinical practice?
  5. Are your patients using AI before they see you?
  6. How do you discuss how you are using AI with your patients?

Author’s Note

Dr. Felix Ankel was recently elected to the Executive Committee of the American Board of Emergency Medicine (ABEM). He will be serving as President-Elect this year before becoming ABEM President in 2026. While fulfilling his leadership duties with ABEM, Drs. Nathan Baggett & Graci Gorman will be taking over authorship of his blog.

About the Authors

Nathan Baggett, MD is a clinical faculty member at HealthPartners in St. Paul, Minnesota. He previously completed a fellowship in medical education and is finishing his Master of Academic Medicine through the University of Southern California.

Graci Gorman, MD is the Program Director of the Medical Education Fellowship at Regions Hospital where she is also a clinical faculty member.

Photo courtesy of iStock

The views and opinions expressed in this post are those of the author(s) and do not necessarily reflect the official policy or position of The University of Ottawa. For more details on our site disclaimers, please see our ‘About’ page.