ICE Blog

Deskilling and Automation Bias: A Cautionary Tale for Health Professions Educators


By: Eric Warm MD, MACP

In an era when artificial intelligence (AI) promises unprecedented transformation in medicine, health professions educators find themselves at a pivotal crossroads.

While AI can improve diagnostic accuracy, personalize care, and streamline workflows, it also risks eroding the very expertise we aim to cultivate in future clinicians.

Two closely intertwined threats—deskilling and automation bias—have emerged as central challenges that demand urgent attention.

What Are Deskilling and Automation Bias?

Deskilling refers to the erosion of clinical judgment, procedural competence, or diagnostic reasoning due to over-reliance on automated systems. Tasks once performed with deep skill become passively monitored or entirely delegated to machines. Think of it as a cognitive and manual atrophy: skills fade not because they are unnecessary, but because they are no longer practiced.

Automation bias compounds this problem. It is the tendency to trust automated systems uncritically—even when they are wrong. This leads to two types of errors: errors of commission (acting on incorrect AI suggestions) and errors of omission (failing to act because the AI didn’t prompt action). As AI systems become more deeply embedded in electronic health records, diagnostics, and decision-making tools, these risks are no longer speculative—they are already visible in clinical training and care delivery.

How Deskilling Manifests in Health Professions Training

Health professions education is already showing signs of subtle, yet significant, erosion in core competencies due to automation. Consider these real-world scenarios:

These examples point to an urgent trend: technology is not just changing how we teach, but what gets learned.

Automation Bias in Practice

The dangers of automation bias are just as pressing. Studies show that clinicians—even experienced ones—are vulnerable to over-trusting AI systems:

Bias doesn’t stem from ignorance—it often arises from efficiency pressure, cognitive overload, or misplaced confidence that “the machine knows better.”

Lessons from Outside Healthcare: The Human-in-the-Loop Imperative

Mitigation begins with explicit human‑in‑the‑loop (HITL) design—systems that require clinicians to review, interpret, and when necessary, override AI recommendations. The idea is hardly novel. In November 2024, Presidents Biden and Xi publicly agreed that “the decision to use nuclear weapons should remain under human control and not be delegated to artificial intelligence,” echoing long‑standing U.S. and allied doctrine.8 If the world’s most destructive capability demands human judgment, surely we can insist on the same safeguard before letting a model manage insulin drips or certify a student’s competence.
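To make the HITL idea concrete, here is a minimal sketch of what such a design could look like in software. This is an illustration only, not a description of any existing clinical system; all class and method names (`HITLGate`, `AIRecommendation`, `review`, `override_rate`) are hypothetical. The key design choices are that no AI suggestion takes effect without an explicit clinician decision, that a rationale is required even when accepting (to discourage rubber-stamping), and that overrides are logged so a near-zero override rate can surface possible automation bias.

```python
from dataclasses import dataclass

# Hypothetical sketch of a human-in-the-loop (HITL) gate: an AI
# recommendation is never acted on until a clinician explicitly
# reviews it, and every decision is recorded for audit.

@dataclass
class AIRecommendation:
    patient_id: str
    suggestion: str
    confidence: float  # model's self-reported confidence, 0..1

@dataclass
class ClinicianDecision:
    recommendation: AIRecommendation
    accepted: bool
    rationale: str  # required even on acceptance, to force active reasoning

class HITLGate:
    """Holds AI suggestions until a clinician signs off or overrides."""

    def __init__(self):
        self.audit_log: list[ClinicianDecision] = []

    def review(self, rec: AIRecommendation, accepted: bool,
               rationale: str) -> ClinicianDecision:
        # A blank rationale is rejected: passive assent is not a decision.
        if not rationale.strip():
            raise ValueError("A rationale is required for every decision.")
        decision = ClinicianDecision(rec, accepted, rationale)
        self.audit_log.append(decision)
        return decision

    def override_rate(self) -> float:
        """Fraction of AI suggestions the clinician rejected -- a crude
        signal for automation bias if it trends toward zero over time."""
        if not self.audit_log:
            return 0.0
        overrides = sum(1 for d in self.audit_log if not d.accepted)
        return overrides / len(self.audit_log)
```

In a real deployment the audit log would feed quality review, so that both excessive overriding (distrust) and near-total acceptance (automation bias) become visible to educators and safety teams.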

A Utilitarian Counterpoint: Yes, AI Saves Lives

Despite these risks, AI offers substantial utilitarian benefits across patient outcomes, workforce satisfaction, and system efficiency. AI systems have demonstrated high accuracy in specific diagnostic tasks, such as melanoma detection and sepsis risk prediction, and have been associated with reduced hospital and ICU length of stay, lower in-hospital mortality, and improved management of chronic conditions.9-11

However, the same logic should apply to health professionals and AI as to parachutes and pilots. Just because autopilot can land the plane doesn’t mean we train pilots to be passengers. Medicine is no different. The utility of AI must be weighed not only against its successes but also against the risks when it fails and humans can no longer step in competently.

Actionable Strategies for Health Professions Educators

The question is not whether to adopt AI, but how to adopt it without hollowing out professional expertise. Here are key strategies:

What Implementation Could Look Like

The Bottom Line

AI is neither savior nor saboteur. It is a tool. But like any powerful tool, it reshapes the human roles around it. If we train clinicians merely to supervise machines, we risk not just deskilling them—but losing the heart of what it means to care.

As health professions educators, we must lead the way in designing an AI-enabled future that elevates, rather than erases, human expertise, so that training evolves to meet future challenges without losing sight of timeless educational values.

References:

  1. Automation Bias in Mammography: The Impact of Artificial Intelligence BI-RADS Suggestions on Reader Performance. Dratsch T, Chen X, Rezazade Mehrizi M, et al. Radiology. 2023;307(4):e222176. doi:10.1148/radiol.222176.
  2. Automation Bias: Empirical Results Assessing Influencing Factors. Goddard K, Roudsari A, Wyatt JC. International Journal of Medical Informatics. 2014;83(5):368-75. doi:10.1016/j.ijmedinf.2014.01.001.
  3. Artificial Intelligence Suppression as a Strategy to Mitigate Artificial Intelligence Automation Bias. Wang DY, Ding J, Sun AL, et al. Journal of the American Medical Informatics Association. 2023;30(10):1684-1692. doi:10.1093/jamia/ocad118.
  4. AI in Pathology: What Could Possibly Go Wrong?. Nakagawa K, Moukheiber L, Celi LA, et al. Seminars in Diagnostic Pathology. 2023;40(2):100-108. doi:10.1053/j.semdp.2023.02.006.
  5. Promises and Perils of Artificial Intelligence in Neurosurgery. Panesar SS, Kliot M, Parrish R, et al. Neurosurgery. 2020;87(1):33-44. doi:10.1093/neuros/nyz471.
  6. Autopilots in the Operating Room: Safe Use of Automated Medical Technology. Ruskin KJ, Corvin C, Rice SC, Winter SR. Anesthesiology. 2020;133(3):653-665. doi:10.1097/ALN.0000000000003385.
  7. Decision-Making in Anesthesiology: Will Artificial Intelligence Make Intraoperative Care Safer?. Duran HT, Kingeter M, Reale C, Weinger MB, Salwei ME. Current Opinion in Anaesthesiology. 2023;36(6):691-697. doi:10.1097/ACO.0000000000001318.
  8. Biden, Xi agreed that humans, not AI, should control nuclear weapons. Reuters. November 16, 2024. https://www.reuters.com/world/biden-xi-agreed-that-humans-not-ai-should-control-nuclear-weapons-white-house-2024-11-16/
  9. Transforming Healthcare: The Role of Artificial Intelligence. Aslani A, Pournik O, Abbasi SF, Arvanitis TN. Studies in Health Technology and Informatics. 2025;327:1363-1367. doi:10.3233/SHTI250625.
  10. Artificial Intelligence in U.S. Health Care Delivery. Sahni NR, Carrus B. The New England Journal of Medicine. 2023;389(4):348-358. doi:10.1056/NEJMra2204673.
  11. Benefits and Harms Associated With the Use of AI-related Algorithmic Decision-Making Systems by Healthcare Professionals: A Systematic Review. Wilhelm C, Steckelberg A, Rebitschek FG. The Lancet Regional Health. Europe. 2025;48:101145. doi:10.1016/j.lanepe.2024.101145.

The views and opinions expressed in this post are those of the author(s) and do not necessarily reflect the official policy or position of The University of Ottawa. For more details on our site disclaimers, please see our ‘About’ page.
