By: Nathan Baggett, MD and Julie G Nyquist, PhD
It has become nearly impossible to open a medical education journal or speak with colleagues without the conversation quickly turning to artificial intelligence. While some have championed AI as the means to make education more personalized, adaptable, and responsive to our learners, this optimism is not universally shared. For skeptics, AI looms as an existential threat that may erode expertise and destabilize the education system.
Both perspectives are understandable. But these reactions to a rapidly emerging technology are not unique to the age of AI.
Artificial intelligence is just the latest disruption in a long and rapidly accelerating history of technological change that has continually reshaped how humans live, work, and learn. Long before circuits, networks, and algorithms transformed our lives, earlier innovations – such as spoken language (70,000-100,000 BCE), written language (~3,000 BCE), the printing press (1440), telephone and film (early 1900s) – changed the ways we created, stored, and shared knowledge.
Over the last 50 years, computers have evolved from room-sized colossi to personal desktop machines to pocket-sized devices that are ever-present today. In just the last five years, with the widespread adoption of artificial intelligence and large language models, the pace of innovation has continued to accelerate. AI is now embedded in the everyday technologies and workflows we rely on—our computers, internet search, and the handheld devices we carry with us.
Technological disruptions aren’t new. What is new today is the speed at which we must adapt to this latest disruptor.
For educators facing a technology that seems to evolve and grow its capabilities at breakneck speed, it’s easy to feel lost. Questions easily arise:
- What does it mean to teach and learn when answers can be rapidly generated for us, but meaning and responsibility cannot?
- How can we uncover and support developing competence when surface-level performance may appear to be expertise?
- What is our role as educators in designing learning environments—classroom and clinical—that foster motivation, judgment, and growth in the age of AI?
These are not questions about the technology itself. They are questions about learning and teaching, and about how we adapt and apply educational frameworks in the era of AI.
Leaning on foundational frameworks can help bring meaning and clarity amid this AI-evoked disruption in education. To think about learning in the age of AI, we’ll consider dominant learning theories and how AI can expose weaknesses in our educational approaches.
First, the Dreyfus model (Dreyfus HL & Dreyfus SE, 1980) illustrates how learners progress from novice to expert over time. The Dreyfus model helps the educator understand a learner’s developmental stage and craft educational experiences suited to it. At any stage, the concern is that a learner’s confidence can outstrip their competence, especially for advanced beginners. As early learners look to AI, they risk imitating competence without the requisite developmental growth.
Next is the Affective Domain (Krathwohl, Bloom, & Masia, 1964). This framework reminds us that learners grow not only in knowledge and skills, but also in their level of commitment, internal motivation, and felt responsibility for outcomes. When we place learners in fast-paced, high-demand clinical settings, their cognitive load—what is required of them simply to function and succeed—is very high. Now consider a learner whose motivation to learn is low (for example, during a mandated rotation outside their chosen field) or whose motivation to appear competent is high (to secure honors or a strong letter of recommendation). In these situations, learners may see AI as a convenient opportunity to offload their work – what we label premature offloading.
Premature offloading is the use of AI as a shortcut to an answer or solution before the relevant competence has a chance to take root. The use of shortcuts by learners and clinicians is not new (think CliffsNotes, exam banks, ghost writing). The challenge is that AI answers are so readily available and so convincing in their apparent breadth and depth. With a single prompt, an AI can complete an entire assignment or write an H&P in seconds. This ease and access make it tempting for learners to turn to an AI when their motivation is low or when the task exceeds their cognitive capacity.
What this reveals is that traditional assessment tools, which rely on demonstrations of a learner’s cognitive work, are insufficient to ensure that today’s learners have mastered the material. Early-stage learners (novices and advanced beginners) can masquerade as competent when leaning heavily on their AI assistants.
Given these concerns, how can we adapt our instructional design to the AI-disrupted educational landscape? This starts with us, the educators. In this age, we have a renewed obligation to actively explore our learners’ understanding, judgment, and depth of thinking. We can do this through probing questions that push a learner from reciting information to deeper reasoning. Whether or not the learner used an AI to answer an assignment, draft a differential, or write an H&P, probing questions help determine how deeply the learner has engaged with and internalized the material.
These types of probing questions are likely what you are already doing when working with small groups of learners in the clinical setting. Depending on the learner’s level, these questions might start with basic “why?” and “how?” questions to see if they can extend their thinking beyond a reporter level. As an emergency physician, I often find myself on shift asking learners questions like:
- What’s the next step if this doesn’t work?
- What are you most worried about?
- What could we miss with the workup you’re planning?
- Could it be [insert diagnosis]?
- If X happens, what will you do?
These questions are not new; they are what clinical educators have done for years. But AI demands that we refocus our efforts, using our time with learners to ensure they build the judgment and decision-making skills necessary for clinical practice. These are skills our learners must develop during their training. However, rising AI use, with its risk of premature offloading, may slow the development of expertise at a time when we desperately need fully prepared clinicians ready for independent practice.
Rather than banning or constraining the use of AI, we can start with the affective domain to nurture learners who are internally motivated to engage with the learning. Assignments and activities that an AI tool can easily complete are no longer sufficient, and assessment systems that focus solely on demonstrating knowledge fail to capture what is most essential. Instead of emphasizing knowledge, we should focus on the processes of thinking: judgment and the development of mental models.
Artificial intelligence has arrived as the latest disruptor in education. What AI reveals most is how current practices fail to truly assess a learner’s deeper reasoning and development. When learners at any stage can imitate competence with an AI assistant, educators must rely on probing and exploratory questions to understand what a learner is thinking rather than what they can recite. Educational frameworks that emphasize cognitive and developmental stages remain important, but AI demands that we be attentive to the affective domain that shapes learners’ internal motivation. In the age of AI, knowledge is available in an instant, so learning can no longer be limited to the acquisition or recitation of knowledge. Instead, learning must be centered on the ways learners think, judge, and adapt.
This moment calls on medical educators to lead. AI has not created new questions about learning, judgment, or responsibility—but it has made our longstanding challenges more visible and more urgent. Through our own teaching and our guidance of junior preceptors, we can engage learners thoughtfully as we probe for clinical insight and diagnose premature offloading. We can then guide learners through reflection and graduated responsibility as they develop sound clinical judgment. Our task is not to resist AI, but to design learning environments—classroom and clinical—that incorporate AI use while ensuring learners do the hard work of becoming clinicians who can think, decide, and act responsibly when it matters most.
About the Authors
Julie G. Nyquist, PhD is a Professor of Clinical Medical Education at the Keck School of Medicine of USC and a long-time leader in health professions education. She is a co-founder and director of the Master of Academic Medicine (MACM) program and has chaired the Innovations in Medical Education (IME) Conference from 2014 through 2026. Across more than four decades, Dr. Nyquist has taught, mentored, and supported faculty, medical students, residents, fellows, and pharmacists, with a particular focus on learning environments, motivation, professionalism, assessment, and faculty development. She has delivered over 1,000 workshops and presentations and has served as an investigator on 14 federally funded education grants. Her recent work explores the role of artificial intelligence in health professions education, with attention to how AI can support learning, feedback, and reflection—while also posing risks to ownership, judgment, and professional identity formation. She approaches AI through the lens of educational purpose rather than technical novelty. Dr. Nyquist earned her PhD in Educational Psychology from Michigan State University.
Nathan Baggett, MD is an emergency physician and educator at HealthPartners in St. Paul, Minnesota and an Assistant Professor of Emergency Medicine at the University of Minnesota Medical School. As Director of Artificial Intelligence for his emergency department, he leads initiatives to integrate AI into clinical practice, resident education, and assessment systems. He completed a Medical Education Fellowship in 2025 and is currently completing a Master of Academic Medicine at the University of Southern California, where he explores how AI can transform clinical reasoning, feedback, and training in health professions education. Dr. Baggett earned his MD from the University of Wisconsin School of Medicine and Public Health in 2017.
Photo courtesy of ISTOCK
The views and opinions expressed in this post are those of the author(s) and do not necessarily reflect the official policy or position of The University of Ottawa. For more details on our site disclaimers, please see our ‘About’ page.
