ICE Blog

Navigating Challenges in Programmatic Assessment for Competency-Based Postgraduate Education

By: Sören Huwendiek, MD, PhD, MME & Christoph Berendonk, MD, MME


Integrating assessment into competency-based postgraduate education presents a range of challenges. Programmatic assessment has emerged as a guiding framework for designing these systems, aiming to create a more holistic and meaningful evaluation process with a focus on learning and competency development (1,2). However, while some aspects of programmatic assessment are straightforward to implement, others pose significant difficulties. This blog explores these challenges and offers potential solutions to optimize assessment strategies.

The Essence of Programmatic Assessment

Programmatic assessment operates on the principle that every interaction, evaluation, or feedback moment contributes to a trainee’s overall assessment portfolio and learning. This approach ensures continuous feedback and improvement while reducing the stakes of individual assessments. However, implementing this principle effectively can be complex.

For instance, providing feedback at each assessment point seems intuitive and feasible. Feedback is widely recognized as essential for learning and development, offering trainees actionable insights to refine their competencies. Yet, the principle that every data point should be treated as a low-stakes assessment often raises concerns. Trainees and faculty alike may perceive this as an overwhelming volume of assessments, creating unnecessary stress and logistical challenges.

Challenges in Workplace-Based Assessments

One significant difficulty lies in the dual role of supervisors as both assessors and coaches. In workplace-based assessments (WBAs), supervisors are tasked with observing and evaluating trainees’ performance while simultaneously providing support and guidance. This dual role can create conflicts and motivate trainees to “play the game”: trainees may hesitate to seek help or admit vulnerabilities when their supervisor is also their assessor. This also appears to hold for WBAs that use entrustment scales (3).

Separating these roles is not straightforward, however. Supervisors are often best placed to assess trainees because of their proximity to day-to-day clinical performance. Balancing these responsibilities therefore requires thoughtful strategies to mitigate potential conflicts and ensure that assessments remain both sound and constructive (4).

Moreover, recent research has highlighted the continuing difficulty of capturing trainees’ competencies reliably with WBAs. Ryan et al. (6) found that WBAs using entrustment scales do not achieve sufficient reliability for summative entrustment decisions. This suggests that relying on WBAs alone for summative entrustment decisions may not fully meet the needs of programmatic assessment.

The Need for a Balanced Assessment Mix

A key consideration in programmatic assessment is the inclusion of a diverse range of assessment tools. While WBAs (short observations, as well as longitudinal instruments such as multisource feedback) are invaluable for capturing real-world performance, they are not without limitations. Structured assessments, such as written applied-knowledge exams and OSCEs, can play a crucial role in complementing WBAs.

Structured assessments also reduce the emphasis on the dual supervisor-assessor role. By placing greater weight on structured tools for summative decisions, the pressure on WBAs to serve as the primary assessment method is diminished.

Creating a Safe and Supportive Learning Environment

Psychological safety and supportive coaching are foundational to successful WBAs. A safe learning environment encourages trainees to embrace feedback and view assessments as opportunities for growth rather than punitive measures (5). However, fostering such an environment requires deliberate effort over time. Role modeling and faculty development are essential components, ensuring that educators are equipped to support trainees effectively.

Evidence Supporting a Balanced Approach

The recent consensus statement on programmatic assessment principles (1) highlights the triangulation of information across data points as a core principle of programmatic assessment (principle 8). For example, determining whether a learner can be summatively entrusted to perform a clinical procedure might draw on exam components from an OSCE, along with WBAs and elements of a patient opinion questionnaire. In the consensus statement on the implementation and practice of programmatic assessment (2), most programs reported using simulated assessments (e.g., OSCEs) and written assessments in addition to low-stakes WBAs.

Gray et al. (7) demonstrated that a written applied-knowledge exam (the internal medicine specialist exam) had stronger prognostic value for physicians’ future patient outcomes than supervisor entrustment ratings made after a longer period of supervision. Further, Tamblyn and colleagues (8) found that low communication scores on a national clinical skills exam (an OSCE) were predictive of retained patient complaints.

These consensus statements and studies underscore the importance of integrating diverse assessment tools, including structured assessments, to capture a more comprehensive picture of trainee competence.

Embracing Both WBAs and Structured Assessments

Rather than viewing WBAs and structured assessments as competing methods, postgraduate training programs should embrace their complementary strengths. WBAs excel at capturing day-to-day clinical performance and fostering continuous feedback, while structured assessments can provide reliable data for high-stakes decisions. Together, these tools yield information-rich data that build a robust, balanced assessment framework supporting both learning and high-stakes decisions.

Moving Forward

Addressing the challenges of programmatic assessment requires a multifaceted approach:

  1. Embrace complementary strengths: Use WBAs and structured assessments together for both learning and high-stakes decisions.
  2. Foster role separation: Clarify and delineate the roles of supervisors and assessors to minimize conflicts.
  3. Invest in faculty development and role modelling: Equip educators with the skills and tools needed to create psychologically safe learning environments and provide effective coaching.

By embracing these strategies and recognizing the value of such a balanced assessment approach, postgraduate education programs might better support the development of competent trainees and the successful treatment of their patients.

About the Authors:

Sören Huwendiek, MD, PhD, MME, FAMEE, FKIPRIME, is an Associate Professor and Head of the Department of Assessment and Evaluation, Institute for Medical Education, University of Bern. His research focuses on innovative approaches in education and assessment to ultimately improve patient care.

Christoph Berendonk, MD, MME, is an Associate Professor and Head of Practical Assessment at the Department of Assessment and Evaluation, Institute for Medical Education, University of Bern. He has been involved in the implementation and improvement of WBA and structured practical examinations, both academically and practically, for the past 20 years.

References

  1. Heeneman S, de Jong LH, Dawson LJ, Wilkinson TJ, Ryan A, Tait GR, Rice N, Torre D, Freeman A, van der Vleuten CPM: Ottawa 2020 consensus statement for programmatic assessment – 1. Agreement on the principles. Med Teach. 2021 Oct;43(10):1139-1148. doi: 10.1080/0142159X.2021.1957088. Epub 2021 Aug 3. PMID: 34344274
  2. Torre D, Rice NE, Ryan A, Bok H, Dawson LJ, Bierer B, Wilkinson TJ, Tait GR, Laughlin T, Veerapen K, Heeneman S, Freeman A, van der Vleuten C: Ottawa 2020 consensus statements for programmatic assessment – 2. implementation and practice. Med Teach. 2021 Oct;43(10):1149-1160. doi: 10.1080/0142159X.2021.1956681. Epub 2021 Jul 30. PMID: 34330202
  3. Martin L, Sibbald M, Brandt Vegas D, Russell D, Govaerts M: The impact of entrustment assessments on feedback and learning: Trainee perspectives. Med Educ. 2020 Apr;54(4):328-336. doi: 10.1111/medu.14047. Epub 2020 Jan 24. PMID: 31840289
  4. Hatala R, Ellaway RH: Does authentic assessment undermine authentic learning? Adv Health Sci Educ Theory Pract. 2024 Sep;29(4):1067-1070.
  5. Brown W, Santhosh L, Stewart NH, Adamson R, Lee MM: The ABCs of Cultivating Psychological Safety for Clinical Learner Growth, J Grad Med Educ. 2024 Apr;16(2):124-127. doi: 10.4300/JGME-D-23-00589.1. Epub 2024 Apr 15.
  6. Ryan MS, Gielissen KA, Shin D, Perera RA, Gusic M, Ferenchick G, Ownby A, Cutrer WB, Obeso V, Santen SA: How well do workplace-based assessments support summative entrustment decisions? A multi-institutional generalisability study. Med Educ. 2024 Jan 2. doi: 10.1111/medu.15291. Online ahead of print.
  7. Gray BM, Vandergrift JL, Stevens JP, Lipner RS, McDonald FS, Landon BE: Associations of Internal Medicine Residency Milestone Ratings and Certification Examination Scores With Patient Outcomes. JAMA. 2024 Jul 23;332(4):300-309. doi: 10.1001/jama.2024.5268.
  8. Tamblyn R, Abrahamowicz M, Dauphinee D, Wenghofer E, Jacques A, Klass D, Smee S, Blackmore D, Winslade N, Girard N, Du Berger R, Bartman I, Buckeridge DL, Hanley JA: Physician scores on a national clinical skills examination as predictors of complaints to medical regulatory authorities. JAMA. 2007 Sep 5;298(9):993-1001. doi: 10.1001/jama.298.9.993.

The views and opinions expressed in this post are those of the author(s) and do not necessarily reflect the official policy or position of the University of Ottawa. For more details on our site disclaimers, please see our ‘About’ page.
