
#KEYLIMEPODCAST 362: “I’ve giv’n her all she’s got captain, an’ I canna give her no more”: Breaking up with Kirk (Patrick)

In this week’s episode, the hosts discuss an article that takes a critical look at outcome evaluation, focusing specifically on the Kirkpatrick Model.

Learn more here.


KeyLIME Session 362

Listen to the podcast


Allen et al., Evaluation in health professions education: Is measuring outcomes enough? Med Educ. 2022 Jan;56(1):127-136


Lara Varpio (@LaraVarpio)


  • There have been many calls in our literature to move BEYOND the focus on the outcomes of innovations (i.e., “did it work?” evaluations) that Kirkpatrick is based on (e.g., Haji, Morin, & Parker (2013) “Rethinking programme evaluation in HPE,” Medical Education)
  • And yet, we continue to use outcome-focused evaluation models.

Primer on Kirkpatrick Model:

  • The Kirkpatrick model is an outcome model that looks at outcomes at 4 levels: participation; KSAs (knowledge, skills, attitudes); behavior change; overall program results.
    • Level 1 is all about participant reactions (“Mikey likes it” reactions)
    • Level 2 measures the degree to which participants acquire KSAs
    • Level 3 measures the extent to which participants apply what they learned in their work
    • Level 4 is the degree to which target outcomes are achieved as a result of the intervention
  • NB:
    • Kirkpatrick did NOT see this as a hierarchical set of levels. Higher levels of the model ARE NOT indicators of higher quality interventions.
    • Kirkpatrick wrote that his model should be used to inform the development of the intervention AND the evaluation of the intervention. The model is not ONLY to be used at the evaluation stage.


  • My paper looks critically at outcome evaluation and focuses specifically on the Kirkpatrick Model

Key Points on the Methods

  • The authors systematically searched four databases with no date limit
  • They screened titles and abstracts and included in their corpus the articles that referenced Kirkpatrick’s model explicitly AND were used in an HPE setting.
  • To make sure they didn’t miss any papers, they hand-searched 6 leading HPE journals for the term Kirkpatrick and included any papers that used Kirkpatrick in the design of the research or intervention evaluation, data collection, or analysis.
  • This process generated a corpus of 605 studies that used Kirkpatrick in some way in an HPE context

Key Outcomes

  • HPE uses Kirkpatrick’s model a lot
  • There have been several HPE-specific modified versions of the model offered, but they have largely been ignored
  • We’re only using it for evaluation, not to inform the entire intervention design and implementation.
  • The Kirkpatrick model has been adopted to do evaluations of A LOT of things.
    • MERSQI = the Medical Education Research Study Quality Instrument (to be used to assess the quality of medical education research)
    • The World Health Organization (used it as a basis for their training evaluation guide)
    • BEME = Best Evidence in Medical Education (to form part of the recommended reviewing coding sheet)
  • Somehow the Kirkpatrick model has become the GOLD STANDARD for program evaluation
  • BUT it SHOULD NOT BE considered the GOLD STANDARD

Key Conclusions

  • The authors conclude: “Without using these types of program evaluation approaches [i.e., the ones listed in Table 3, the more complex approaches], we cannot be sure that the interventions have been implemented as intended, nor can we explain why interventions have or have not been successful”

Access KeyLIME podcast archives here

The views and opinions expressed in this post and podcast episode are those of the host(s) and do not necessarily reflect the official policy or position of The Royal College of Physicians and Surgeons of Canada. For more details on our site disclaimers, please see our ‘About’ page.
