
#KeyLIMEPodcast 73: Maybe it’s time for patient outcomes (!?) to be included in assessment of trainees

The KeyLIME podcast this week dips into the controversy over the appropriate metrics for assessing competence in learners. David Cook has argued, appropriately, that patient outcomes (specifically with regard to program/curriculum evaluation) are heavily confounded by multiple factors. Thus, the signal is biased and potentially not trustworthy.

However, to ignore the patient (and their care) in determining the fitness of a learner for unsupervised practice seems crazy… when you write it in a post.


Check out the KeyLIME summary below or listen to the first KeyLIME podcast recorded before a live studio audience (just like a sitcom from the '70s) here.

– Jonathan (@sherbino)

PS: We welcome a guest host (and winner of the KeyLIME Live contest), Janet Bull.



KeyLIME Session 73 – Article under review:

Listen to the podcast

View/download the abstract here.

Kogan JR, Conforti LN, Iobst WF, Holmboe ES. Reconceptualizing Variable Rater Assessments as Both an Educational and Clinical Care Problem. Academic Medicine. 2014;89(5):721-7.

Jason Frank, Linda Snell, Janet Bull and Jonathan Sherbino at KeyLIME Live at ICRE




For over 15 years, the public has called upon the medical profession and medical education for improved accountability in health care quality, safety, and patient-centered care. At the same time, medical education has been perplexed as to how to reliably assess trainees. The conundrum of how to improve inter-rater reliability in assessment of medical trainees is currently under much debate within the medical education literature.

Kogan and colleagues re-conceptualize the problem of inter-rater reliability as being not just an educational problem laden with psychometric difficulties but also an essential problem for clinical care. The authors argue that the appropriate frame of reference for trainees should be decisions about levels of supervision that are appropriate for the delivery of safe, and high-quality patient care.

Type of paper
Conceptual (perspective) article.

Key Points on the Methods
No methods.

Key Outcomes
A conceptual equation to underscore the point that what a trainee does is a function of competence combined with appropriate supervision and must equate to safe, effective and patient-centered care:

Trainee performance × Appropriate supervision = Safe, effective, patient-centered care

Key Conclusions
Kogan et al. argue that we have missed an essential stakeholder in our assessments of medical trainees: the patient! The authors state that accurate observation means identifying the presence or absence of appropriate clinical skills and important patient outcomes. In fact, they go so far as to say that if an assessment tool “does not take into consideration the outcome of safe, effective, patient-centered care, the utility of that assessment is null and void.”
This is an important article to consider in light of others who might caution medical educators against over-focusing on patient-outcome-oriented research or who may focus exclusively on “effective learning.”

Salient points include:
• Direct observation of work-based patient encounters is key
• Rater variability comes in part from differing “frames of reference,” i.e., the standard against which a trainee is compared (normative or criterion-based)
• The appropriate Frame of Reference (FOR) should be the patient and the care that the patient receives.
• Clinical supervisors need to determine whether a resident can be entrusted with the tasks or activities critical to the profession.
• Not all residents reach the same level of competence in all activities at exactly the same time, nor are all ready for indirect supervision after a fixed amount of time (say, six months).
• Faculty must be familiar with and trained in the requisite skills they are assessing, especially as faculty frequently use themselves as a frame of reference when assessing residents.

Spare Keys – other take home points for Clinician Educators
Linking it all together: The concept of patient care as the essential frame of reference might facilitate development of descriptive anchors for milestones related to entrustable professional activities (EPAs). EPAs, as activities, provide a framework that may make it conceptually easier to include competencies explicitly linked to patient care.

For example, consider “Collaborate as a member of an interprofessional team” (AAMC, 2013). This EPA (#9) includes, among others, a systems-based practice competency: “Coordinate patient care within the health care system.” The entrustable behavior for a trainee entering residency who is capable of practicing unsupervised might read:
Usually involves the patient and family in goal setting and care plans. A written care plan is usually provided and is complete and accurate with few errors of omission. Communicates critical information to other team members and consultants. Both anticipates and answers questions from patients and families. Provides accurate and required information for seamless transitions of care. Understands care coordination resources and accesses them to match patient/family needs. Advocates for patient access to community resources. (Derived from the milestones submitted to ACGME from PEDS, SURG, PSYCH, EM)


References of Interest
• Association of American Medical Colleges (AAMC). (2013). Core entrustable professional activities for entering residency. Retrieved December 30, 2013, from
• Choo, K. J., Arora, V. M., Barach, P., Johnson, J. K., & Farnan, J. M. (2014). How do supervising physicians decide to entrust residents with unsupervised tasks? A qualitative analysis. Journal of Hospital Medicine, 9(3), 169-175.
• Cook, D. A., & West, C. P. (2013). Perspective: Reconsidering the focus on “outcomes research” in medical education: A cautionary note. Academic Medicine, 88(2), 162-167.
• Donato, A. A. (2014). Direct observation of residents: A model for an assessment system. American Journal of Medicine, 127(5), 455-460.
• Kogan, J. R., Conforti, L. N., Iobst, W. F., & Holmboe, E. S. (2014). Reconceptualizing variable rater assessments as both an educational and clinical care problem. Academic Medicine, 89(5), 721-727. doi: 10.1097/acm.0000000000000221
• van der Vleuten, C. P. M. (2014). When I say … context specificity. Medical Education, 48(3), 234-235.
• Yeates, P., O’Neill, P., Mann, K., & Eva, K. (2013). Seeing the same thing differently: Mechanisms that contribute to assessor differences in directly observed performance assessments. Advances in Health Sciences Education, 18(3), 325-341.

Access KeyLIME podcast archives here
