ICE Blog

Let’s talk CBME in the Surgical Learning Environment: How it’s meant to go, and how it’s going

By: Mary Thompson and Jena Hall (@JenaHall1)

The transition of postgraduate medical education (PGME) from time-based to competency-based models (CBME) is one that affects trainees internationally. The rationale for CBME implementation is clear: there is a need to ensure PGME curricula keep pace with an evolving medical field while enhancing assessment systems, both to benefit trainees and to increase the societal accountability of our training programs1. Although some of these expectations have been realized, many have not. Implementation has come with challenges that trainees across specialties continue to face, including administrative burden, performance anxiety, and inconsistency in assessments, which has led to gamification of the system2. This has similarly been our experience.

Leveraging our practical experiences with CBME, in this post, we consider some of the ‘real world’ challenges of CBME within the surgical learning environment3.  

Everyone Needs a Break

The OR environment and schedule appear to cater perfectly to CBME: there is direct observation of a trainee, with built-in time between cases to provide timely feedback on their performance.

However, the OR is an environment with high cognitive demands. As such, trainees perceive that asking for formal assessment completion between cases, during a window of cognitive rest, is unwelcome. It is also our perception that requesting assessment completion between cases can negatively interfere with faculty-trainee relationship development, as this is usually the time to ‘grab coffee’, chat about life outside of work, and provide mentorship.

Psychological Safety in Asking for Constructive Feedback 

The CBME paradigm touts a culture of frequent, low-stakes assessments that, when combined, make up the bigger picture of a trainee’s performance. This demands a culture in which constructive feedback is normalized, trainees and supervisors feel psychologically safe, and assessments document not only ‘achieved’ skills but skill progression.

Realistically, the expectation to achieve a high volume of EPAs within each stage of training generates added pressure on trainees and staff. Trainees ask for assessments retrospectively, when they believe they performed well, because the administrative burden of asking more often is too great. This practice virtually eliminates opportunities to document constructive feedback and makes every assessment high stakes. The learner’s expectation during feedback opportunities becomes “EPA achievement”, fostering a culture of assessment for documentation rather than assessment for learning.

Additionally, constructive feedback conversations require psychological safety to reduce learner vulnerability. As one is often sharing space in the OR with interprofessional colleagues, opportunities for this ideal feedback environment are limited. This further reduces a learner’s likelihood of asking for feedback when they have not performed well.

Bring on the Bias

Although EPA-based assessments are designed to objectively capture performance, evidence suggests that they are, in fact, vulnerable to subjectivity. This is demonstrated by differences between male and female trainees’ evaluations. For example, Roshan et al.4 demonstrate that male staff are more likely to provide a significantly more positive review of a female resident compared to a male resident performing the same task. Conversely, another study by Padilla et al.5 demonstrates that although faculty assessments do not differ across genders, female general surgery resident self-assessments are lower than those of their male counterparts. 

Ask learners and supervisors alike, and they will tell you that the individual parts are not a reliable method to interpret the whole. In practice, the inherently granular nature of EPA-based assessments limits their ability to capture the intangible aspects of what makes a “good surgeon”, including judgment, courage, and insight. One might argue that there is space on the assessments for narrative comments; however, it is our experience that these sections are rarely filled with helpful content, if anything at all. So, how else can we capture that ‘gestalt’, subjective assessment that is so important to communicate to learners?

A Way Forward

While CBME holds promise for surgical trainees, striking a balance is essential to harness its full potential. 

  1. Prioritize education. Simply providing residents with a list of rotation-mapped EPAs is not sufficient. For CBME to reach its full potential, residents AND supervisors must be adequately oriented and trained in the CBME framework. This should occur periodically, as with any CME topic. A stronger grasp of CBME by all ensures the consistency and credibility of the assessment process. 
  2. De-emphasize volume, re-emphasize value. To mitigate assessment fatigue, assessment schedules should de-emphasize volume and instead emphasize the value and quality of assessments. This shift in focus would not only alleviate some of the administrative burden associated with CBME, but also allow trainees to better understand and respond to the more focused feedback. 
  3. Renew the focus on intrinsic motivators. CBME fosters extrinsic motivation, focusing on supervised learning and EPA acquisition. This translates into the maladaptive perspective that there is “no point” to unsupervised practice, given there is no opportunity for EPA completion. Surgical skill, however, is acquired not only in the operating room but also through surgical simulation activities, home laparoscopic training models, and more. These aspects are not captured by CBME, yet they are vitally important to a trainee’s surgical development and are quickly losing traction under the new paradigm. 
  4. Accountability for all. CBME hinges on active and equal participation from residents and supervisors; however, currently only residents are held accountable. At our institution, residents are expected to achieve an average of three EPAs per week, and quarterly reviews ensure residents are progressing as expected. Supervisors are not held to the same standard: they face neither expectations nor repercussions for assessment completion. We believe that implementing comparable expectations for supervisors would improve resident psychological safety around asking for assessment completion and ease some of the residents’ cognitive and administrative burden. 

In conclusion, the transition of PGME from a time-based to competency-based model is a necessary shift. While it holds promise, its implementation requires ongoing refinement to address the specific challenges encountered. By prioritizing education, de-emphasizing volume, renewing intrinsic motivation, and promoting accountability, we can work towards a more effective and equitable CBME system in surgical training and beyond.

About the authors:

Dr. Mary Thompson, MD, is a third-year Obstetrics and Gynecology resident at the University of Calgary. She is passionate about optimizing the implementation of CBME within the clinical learning environment after experiencing its benefits and drawbacks first-hand.

Dr. Jena Hall MD, MEd, FRCSC, is a Urogynecologist at the University of Calgary. Her interest in medical education is rooted in cognitive theories of learning and optimization of surgical training within the OR learning environment.


  1. Royal College of Physicians and Surgeons of Canada. Competence by design: The rationale for change. Retrieved from
  2. Branfield Day, L., Colbourne, T., Ng, A., Rizzuti, F., Zhou, L., Mungroo, R., & McDougall, A. (2023). A qualitative study of Canadian resident experiences with Competency-Based Medical Education. Canadian Medical Education Journal, 14(2), 40-50.
  3. Nordquist, J., Hall, J., Caverzagie, K., Snell, L., Chan, M. K., Thoma, B., … & Philibert, I. (2019). The clinical learning environment. Medical Teacher, 41(4), 366-372.
  4. Roshan, A., Farooq, A., Acai, A., Wagner, N., Sonnadara, R. R., Scott, T. M., & Karimuddin, A. A. (2022). The effect of gender dyads on the quality of narrative assessments of general surgery trainees. The American Journal of Surgery, 224(1), 179-184.
  5. Padilla, E. P., Stahl, C. C., Jung, S. A., Rosser, A. A., Schwartz, P. B., Aiken, T., … & Minter, R. M. (2022). Gender differences in entrustable professional activity evaluations of general surgery residents. Annals of Surgery, 275(2), 222-229.

The views and opinions expressed in this post are those of the author(s) and do not necessarily reflect the official policy or position of The University of Ottawa. For more details on our site disclaimers, please see our ‘About’ page.
