By: Daniel Cabrera, Felix Ankel
We live in exponential times. Our relationship with knowledge, professional identity, and structure is rapidly changing. How will we manage knowledge? How will the role of the clinician change? How will we educate in this new environment? How will our health systems evolve? This is the first of a three-part series focusing on AI (microwaves), human capital (chefs), and the learning organization (restaurants).
Part 1: Microwaves
Learning about electricity before using a microwave oven
Before identifying specific AI tools and describing their use, it is essential to build the skills to manage and collaborate with these technologies appropriately. Think of these skills as training wheels for advanced AI-based tools: picture yourself learning to use a microwave for the first time.
The UK Government Digital Service's AI principles provide a comprehensive governance approach, ensuring that AI implementations are lawful, ethical, secure, and transparent. For example, AI-assisted grading systems must be explainable and fair, adhering to principles of accountability and human oversight. Likewise, AI-driven virtual patients must be used responsibly and ethically, with safeguards in place to prevent bias in case presentations.
This framework emphasizes the necessity for meaningful human control, ensuring that educators supervise AI outputs instead of allowing them to operate autonomously in high-stakes learning environments. By aligning AI adoption in medical education with these principles and frameworks, institutions can harness AI’s benefits while preserving educational integrity and ethical responsibility.
Governing principles
1. Know What AI Is and Its Limitations
AI encompasses machine learning, natural language processing, and generative models that assist in teaching, assessment, and decision-making. However, AI lacks reasoning, contextual awareness, and guaranteed accuracy. Medical educators must be aware of AI’s strengths and weaknesses to use it effectively in training future healthcare professionals.
2. Use AI Lawfully and Safely
AI applications in medical education must comply with data protection laws, intellectual property rights, and ethical considerations. AI tools should be used to support—not replace—human judgment. Medical educators should be aware of potential biases in AI models and work with legal and compliance teams when adopting AI-based tools.
3. Use AI Responsibly and Ethically
Ethical AI use ensures inclusivity, fairness, and transparency. AI-driven assessments and teaching tools should undergo rigorous testing to prevent reinforcing biases in medical training. The UK AI Playbook emphasizes ensuring AI positively impacts students, avoids discrimination, and maintains human oversight.
4. Keep Control: Human-in-the-Loop Approach
AI should assist rather than replace human expertise. Educators should maintain oversight over AI-driven tools, ensuring meaningful human intervention in AI-assisted decision-making processes. This is crucial for high-stakes applications, such as AI-driven evaluations of clinical reasoning. Think of AI as part of an assessment infrastructure (supports the process), rather than a superstructure (drives the process).
5. Identify the Right Tool for the Job
Not all educational challenges require AI. Educators should avoid implementing AI for AI’s sake and instead identify areas where AI can genuinely enhance learning outcomes. For example, AI can optimize surgical training through augmented reality simulations, but it may not be necessary for basic knowledge assessments.
6. Stay Literate, Fluent, and Skilled in AI Use
Medical educators must continuously update their AI literacy to ensure they are equipped to guide students in using AI tools effectively. Training programs should include AI ethics, its limitations, and its practical applications in medical education.
Now, learn how to cook food in a microwave oven
Adopting new AI tools in medical education should follow a structured approach. Like previous technologies, AI brings unique challenges given its breadth, complexity, and pace of change.
A useful model, described by Gordon et al., considers the type of general-use AI technology, the intent of use (SAMR: substitution, augmentation, modification, or redefinition), the use case, the educational focus, and the specific AI technique.
For example, a chatbot that analyzes learners’ performance after an OSCE might be introduced to assist in providing feedback. Within the SAMR framework, this tool could initially operate at the substitution level, replacing human debriefing with automated feedback. As the AI evolves, it could transition into augmentation by providing personalized feedback on diagnostic reasoning. The use case here is assessment, and the educational focus is improving students’ clinical reasoning through immediate AI-powered feedback. The underlying technique could be natural language processing (NLP) applied to written case responses, ensuring structured and consistent evaluations.
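To make the idea concrete, here is a minimal, hypothetical sketch of how such automated feedback might be structured. A real system would use an actual NLP model (for example, an LLM) rather than keyword matching; the rubric domains and items below are invented for illustration only.

```python
# Hypothetical sketch: rubric-based automated feedback on a written OSCE
# case response (chest pain scenario). Keyword matching stands in for a
# real NLP model purely to show the shape of the feedback report.

RUBRIC = {
    "history": ["onset", "duration", "radiation"],
    "differential": ["acs", "pulmonary embolism", "dissection"],
    "plan": ["ecg", "troponin", "aspirin"],
}

def feedback(response: str) -> dict:
    """Return per-domain coverage and missed items for a learner's response."""
    text = response.lower()
    report = {}
    for domain, items in RUBRIC.items():
        hit = [i for i in items if i in text]
        report[domain] = {
            "coverage": len(hit) / len(items),   # fraction of rubric items addressed
            "missed": [i for i in items if i not in text],
        }
    return report

note = ("55M with crushing chest pain, onset 1 hour ago. "
        "Differential includes ACS and dissection. Plan: ECG, troponin.")
print(feedback(note)["plan"]["missed"])  # learner never mentioned aspirin
```

The structured report (coverage plus missed items per domain) is what allows feedback to remain consistent across learners, with a human educator reviewing the output before it reaches the student, consistent with the human-in-the-loop principle above.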
In contrast, a more advanced AI system, such as a digital twin patient powered by generative AI and deep learning, would align more with modification or redefinition in the SAMR framework, as it fundamentally transforms how students interact with clinical scenarios. Instead of traditional case-based learning, students could engage in interactive, AI-driven meta-patient encounters that adapt dynamically to their responses. The use case here is clinical simulation, with an educational focus on decision-making, diagnostic skills, and patient communication. The specific technology might include reinforcement learning and digital twinning to create adaptive patient responses, ensuring that each student encounters a personalized learning experience that mimics real-world complexity.
Implementing artificial intelligence in assessment is much like introducing a microwave into the kitchen: it’s powerful, transformative, and not without risk. Just as electricity revolutionized cooking through the microwave oven, AI has the potential to radically improve the creation, delivery, and assessment of health professions education through tools such as LLM-based simulation debriefers. But owning a microwave doesn’t automatically make someone a master chef; you still need to know not to put a fork inside, or you’ll get a house fire instead of popcorn. Similarly, educators and IT professionals must understand AI’s strengths, limits, and biases to use it safely and effectively, and to ensure the educational experience is appropriate, not dangerous. In both cases the goal is the same: harness a powerful technology to produce something useful, reliable, and safe, whether that is hot food or high-quality, human-centered feedback.
Questions to consider
- Are you using AI in assessment?
- Can you verbalize its limitations?
- Are you aware of the legal implications of using AI, especially for high-stakes assessments?
- What ethical framework are you using?
- What human safeguards are you using?
- What is your decision architecture and governance around AI and assessment?
- What is your faculty development plan for educators using AI in assessment?
Look for part II of this series: Be the chef, not the microwave.
References/Further Reading
Launching the Artificial Intelligence Playbook for the UK Government. https://www.gov.uk/government/publications/ai-playbook-for-the-uk-government
Image from iStock
The views and opinions expressed in this post are those of the author(s) and do not necessarily reflect the official policy or position of The University of Ottawa. For more details on our site disclaimers, please see our ‘About’ page.
