By: Christy Boscardin and Karen E. Hauer
Introduction
Artificial intelligence (AI) has the potential to transform health professions education. AI tools such as ChatGPT, a form of generative AI, and new ways of using them are continuously emerging. The literature and lay press offer many examples of how these tools can enhance the quality and efficiency of clinical practice. Clinicians, educators, and learners are already experimenting with AI tools to access medical information, generate clinical summaries, or personalize care. In this blog, we discuss three ways that AI can augment competency-based education (CBE) and be incorporated into clinical training.
The dynamic, rapidly changing environment for AI challenges health professions educators to be nimble and forward-thinking. Educators must prepare students and residents to adapt to a clinical care and assessment environment where AI will be used frequently in new and exciting ways. This scenario is reminiscent of the quote from Wayne Gretzky: “Skate to where the puck is going to be, not where it has been.” We must train learners in a CBE program to be prepared for where clinical practice enhanced by AI is going, not just how clinical practice has traditionally occurred.
AI literacy in health professions education includes understanding the capabilities of AI and the tools it offers to enhance healthcare, integrating AI tools into teaching, and ensuring inclusive, equitable, and ethically responsible use of AI for societal good.1 Educators can draw on published competencies for AI in clinical practice, which address basic knowledge of AI and its healthcare applications, societal and ethical implications, AI-enhanced clinical encounters, evidence-based use of AI tools, workflows for AI tools, and practice-based learning and improvement regarding AI.2
Below, we share examples of how AI may be incorporated into a CBE program using approaches focused on two exemplar competency domains: patient care (US)/medical expert (CanMEDS) and practice-based learning and improvement (PBLI, US)/scholar (CanMEDS) competencies.
Patient Care – clinical reasoning
Clinical reasoning is a hallmark physician responsibility. The physician engages in clinical reasoning by considering clinical information and generating a differential diagnosis and likely diagnosis. This cognitive process may occur automatically through pattern recognition, drawing on the physician’s breadth of knowledge and clinical experience, or through a systematic, analytic strategy of weighing possibilities and likelihoods.
As AI tools gain sophistication in sifting through and interpreting large amounts of clinical data, they can assist learners and physicians with clinical reasoning in multiple ways. AI tools that generate clinical notes or handoff summaries reduce physician workload. Based on a list of symptoms, vital signs, physical exam findings, or lab and imaging data, an AI tool could propose diagnostic considerations. In these contexts, the learner or physician must have expert skills in critical appraisal. Augmenting current competency expectations for using the literature and other evidence in clinical reasoning, future physicians will need competence in assessing the quality of AI-generated diagnostic considerations and clinical summaries and in identifying potential bias to avoid perpetuating stereotypes propagated by AI. Competency in recognizing incorrect information and the contributors to that inaccuracy will be essential. For the successful incorporation of AI tools into the clinical reasoning process, learners and physicians will also need skills in safeguarding patient data security and privacy.
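To make the idea of AI-proposed diagnostic considerations concrete, here is a toy sketch of the kind of input and output interface such a tool might expose. This is an illustration only, not a clinical tool: a real system would rely on a trained model, whereas this sketch uses a small hand-written lookup, and the findings and diagnoses listed are hypothetical examples.

```python
# Toy illustration: mapping a list of clinical findings to candidate diagnoses.
# The "knowledge base" below is hand-written for demonstration; a real AI tool
# would derive these associations from a trained model, not a fixed dictionary.

KNOWLEDGE = {
    "chest pain": ["acute coronary syndrome", "pulmonary embolism", "GERD"],
    "fever": ["pneumonia", "urinary tract infection"],
    "dyspnea": ["pulmonary embolism", "heart failure", "pneumonia"],
}

def suggest_diagnoses(findings):
    """Return candidate diagnoses ranked by how many findings each one matches."""
    counts = {}
    for finding in findings:
        for dx in KNOWLEDGE.get(finding, []):
            counts[dx] = counts.get(dx, 0) + 1
    # Diagnoses matching more findings are listed first.
    return sorted(counts, key=lambda dx: -counts[dx])

print(suggest_diagnoses(["chest pain", "dyspnea"]))
```

Even in this toy form, the output shape highlights why critical appraisal matters: the learner receives a ranked list with no explanation, and must judge which suggestions fit the patient and which are spurious.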
Formative feedback for PBLI
The Accreditation Council for Graduate Medical Education (ACGME) has defined PBLI as the ability to continuously improve patient care based on constant self-evaluation and lifelong learning. With machine learning and the advent of large language models able to understand, summarize, and generate text, including transcripts of audio recordings, these new AI capabilities can transform how learners receive just-in-time feedback on their performance and monitor their learning progression. AI tools trained on PubMed, medical and clinical reference texts, patient notes, and clinical guidelines will be able to review draft notes created by learners and provide feedback along with references or resources to consider.3 AI tools can also review and summarize learners’ notes across rotations or clerkships to display learning progression in the ability to summarize patient information and generate differential diagnoses and management plans. Areas for learner improvement could be automatically fed forward to the next rotation to guide learners and supervising faculty. Optimizing these tools to summarize performance data across multiple data sources, including learner-generated notes, oral presentations, narrative evaluator comments, and numerical ratings, will provide a more holistic picture of performance and help learners identify personal strengths and gaps.
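One way such a note-review tool could be wired up is to combine a learner's draft note with a rubric of review questions into a single structured prompt for a large language model. The sketch below shows only that prompt-assembly step; the rubric questions, function name, and rotation label are hypothetical examples, and the call to an actual model is deliberately omitted.

```python
# Minimal sketch of prompt assembly for an LLM-based note-feedback tool.
# The rubric and names here are illustrative, not from any real product.

RUBRIC = [
    "Is the history of present illness complete and well organized?",
    "Does the differential diagnosis fit the reported findings?",
    "Is the management plan supported by current guidelines?",
]

def build_feedback_prompt(draft_note: str, rotation: str) -> str:
    """Combine a learner's draft note with rubric questions into one prompt."""
    criteria = "\n".join(f"{i + 1}. {q}" for i, q in enumerate(RUBRIC))
    return (
        f"You are reviewing a draft clinical note from a learner "
        f"on the {rotation} rotation.\n"
        f"Assess the note against these criteria and suggest references "
        f"to consider:\n{criteria}\n\n"
        f"Draft note:\n{draft_note}"
    )

prompt = build_feedback_prompt("58-year-old with chest pain ...", "internal medicine")
```

Keeping the rubric explicit in the prompt is one plausible way to anchor AI feedback to the program's own competency expectations rather than to whatever the model defaults to.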
Patient data for PBLI
One of the challenges for PBLI is how to optimize the patient’s electronic health record (EHR) data to enhance learning opportunities and feedback. The EHR can provide valuable information for learning and clinical feedback to trainees regarding patient outcomes, including diagnostic information, disease progression, and readmissions. As Lees et al.4 suggested, EHR data has the potential to allow trainees and programs to glean insights from the data, enabling practice-based learning through critical review of and reflection on their practice patterns and outcomes. With recent advancements in AI and health informatics,5,6 summarization and real-time analytics from EHRs will likely improve the capacity of learners to monitor and learn from their patient data. Recent advancements in AI tools that optimize large language models and machine learning techniques have improved how EHR data can be pulled, summarized, and visualized to maximize interpretability, ensure attribution to the appropriate trainees, and support meaningful feedback.
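The attribution step described above can be sketched very simply: given outcome records tagged with the responsible trainee, a summarizer tallies each learner's encounters and outcomes for later review. The record format and field names below are invented for illustration and do not reflect any real EHR schema.

```python
# Minimal sketch, assuming a simplified record format: attributing patient
# outcomes to trainees so each learner can review their own practice patterns.
from collections import defaultdict

records = [
    {"trainee": "A", "diagnosis": "heart failure", "readmitted_30d": True},
    {"trainee": "A", "diagnosis": "pneumonia", "readmitted_30d": False},
    {"trainee": "B", "diagnosis": "heart failure", "readmitted_30d": False},
]

def summarize_by_trainee(records):
    """Tally encounters and 30-day readmissions per trainee for feedback review."""
    summary = defaultdict(lambda: {"encounters": 0, "readmissions": 0})
    for r in records:
        s = summary[r["trainee"]]
        s["encounters"] += 1
        s["readmissions"] += int(r["readmitted_30d"])
    return dict(summary)

print(summarize_by_trainee(records))
```

In practice, correct attribution is the hard part: notes are co-signed, patients are handed off, and outcomes accrue after rotations end, so any real summarizer would need far more careful linkage than this tally suggests.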
Conclusion
As generative AI tools are incorporated into clinical practice, they will redefine approaches to learning and assessment of core clinical competencies, including patient care-clinical reasoning and PBLI. These tools will provide instant access to a vast amount of clinical data, reference materials, guidelines, and just-in-time medical knowledge to help develop clinical reasoning and critical appraisal skills. With automated feedback from AI and self-appraisal by individual trainees after reviewing summary data from the EHR or other performance data, coaching by faculty to facilitate reflection will be critical for continued learning and improvement. AI does not replace physicians and other clinicians but will shift and transform physician work in new and exciting ways.
About the authors:
Christy Boscardin, Ph.D. is a professor in the Department of Medicine and Department of Anesthesia and Perioperative Care and the Director of Student Assessment in the School of Medicine. Dr. Boscardin is also the Director of Medical Education Scholarship for the Department of Anesthesia. Her research interests include medical education with a focus on assessment, data science, implementation science, high-value care, and access to quality health care.
Karen Hauer, MD, PhD is Vice Dean for Education at UCSF School of Medicine. She led the design and implementation of the programmatic assessment system and coaching program in the medical school curriculum. She earned a PhD in medical education through the joint UCSF and University of Utrecht doctoral program.
The views and opinions expressed in this post are those of the author(s) and do not necessarily reflect the official policy or position of The University of Ottawa. For more details on our site disclaimers, please see our ‘About’ page.
