By: Adelle Atkinson (@AtkinsonAdelle) and Paul Brand (@PaulBrandZwolle)
A few months ago, we (Paul and Adelle) had the opportunity to do some international work on CBME together. With friends from Europe and North America, over coffee, drinks and dinners, we debated, in very animated ways at times, a variety of issues in medical education, and we critiqued each other’s solutions and ideas (and sometimes laughed at them). The surroundings and the company were great, and the conversations even better.
As part of our work during this time, we staged an educational debate. The motion was “a culture of assessment hurts a feedback culture and stands in the way of a growth mindset”. We (Paul and Adelle) were on different teams. Let’s just say, it was interesting.
Here is the Wicked Problem. Together with our learners, we believe that coaching and feedback from experienced clinical supervisors will support the acquisition of skills and the development of competence needed to practice our craft. We speak about encouraging our learners to adopt a growth mindset and to embrace and internalize feedback; we speak about feedback being formative, not summative. It all sounds like an educational utopia. But then we turn around and, in many cases, use this formative data to make summative assessments and judgements. We do so because we have to demonstrate to the regulatory authorities, the public, and our patients that we have graduated competent physicians who will now practice independently.
This tension is well recognized in the medical education literature.1,2 We need to acknowledge it and intentionally design a system that accommodates the ultimate need for both forms of feedback to reach our goals. Programmatic assessment is intended to resolve this tension.3 However, in our experience as clinical supervisors and program directors, many learners and supervisors still experience workplace-based assessments as tests, which puts them in exam mode and reduces their willingness and capacity to learn from workplace experiences.2 So, for programmatic assessment to be effective and useful in everyday clinical education practice, we need to train both our supervisors and our learners to separate in-the-moment coaching that supports learning from summative assessment. This distinction should be made very clear right from the start of the postgraduate medical education (PGME) program and re-emphasized at every clinical encounter in which the learner receives feedback. We propose that the following message could help serve that purpose.
Dear learners:
Let us tell you about our approach to support your learning.
Every day, you will receive feedback, both solicited and unsolicited, from the teachers you work with, including us. If you want feedback on a specific issue or learning point, just let us know, and we'll be happy to provide it. We will also give you feedback when we feel it could be useful to your growth and development as a consultant in our field. This could take the form of small hints on how to improve your skills, or a suggestion to try an alternative approach. It may also take on a more directive character, for example, when we want to help you avoid pitfalls or prevent serious errors. All of this everyday feedback is intended to help you learn, to help you grow as a professional, and to help you reach your potential. These daily encounters are not exams; no overall grade will be attached to them. You can compare it to when you took driving lessons: you knew that your driving instructor was there to help you get better at driving a car, and that their feedback was intended to help you learn and master the skills needed for safe driving. That's exactly what we do with our everyday feedback; it's like your driving lessons. Whether you record all of this feedback in your portfolio is up to you. If it is useful to you, go ahead and store it as mini-CEXs or workplace-based assessments. Just bear in mind that we will provide more feedback than you want (or have) to record, although some of it will need to be recorded. All of it, after all, is intended to help you learn.
Now, over the course of each stage of your training, and especially around transitions, you will receive a summative assessment of your overall performance and trajectory. Depending on where you are training, you may get this from your Program Director or from an assessment/competence committee. This will be a group judgement, reflecting how we, as a team of clinical supervisors and educators, view your overall growth and acquisition of competence over time, relative to your level of training. This summative assessment should never come as a surprise to you. If we have concerns about your overall competence and growth over time, we will tell you at an early stage. We will sit down with you, share our observations, and then work with you on a plan to address the areas where you are struggling. Most of you may never hear such concerns, because you are doing just fine. But if we are worried or concerned, we commit to sharing it with you. That's a promise. And, once again, you can count on every clinical supervisor to provide you with everyday feedback. We're here to help you learn.
If you have any questions on this approach to support your learning, please let us know.
Our final thoughts:
Most residents are like self-raising flour. They just need a little warmth and nurturing, and they will develop and grow just fine. Why saddle this large majority of residents with a system of repeated formal assessments and an administrative load that nobody is happy with? A strict system of formal assessment encourages residents to perceive these assessments as high-stakes exams, to show off, to stage performances, and to cherry-pick for assessment those activities they know they are good at. It encourages thoracic auto-percussion instead of useful reflection. We truly believe there is room for both: authentic formative feedback for growth and acquisition of competence, alongside the necessary evidence provided by summative assessment programs.
We can do this.
About the authors:
Adelle Atkinson, MD, FRCPC, is a Professor of Paediatrics at the University of Toronto, with a staff position in the Division of Immunology and Allergy at SickKids. She is also the current Director of Postgraduate Medical Education for the Department of Paediatrics and a Clinician Educator with the Royal College of Physicians and Surgeons of Canada.
Paul Brand, MD, is a former paediatrician who now works in the education and coaching of doctors.
References:
- Watling CJ, Ginsburg S. Assessment, feedback, and the alchemy of learning. Med Educ 2019;53:76-85.
- Brand PLP, Jaarsma ADC, van der Vleuten C. Driving lesson or driving test? A metaphor to help faculty separate feedback from assessment. Perspect Med Educ 2021;10:50-6.
- Torre D, Rice NE, Ryan A, et al. Ottawa 2020 consensus statements for programmatic assessment – 2. Implementation and practice. Med Teach 2021;43:1149-60.
The views and opinions expressed in this post are those of the author(s) and do not necessarily reflect the official policy or position of The University of Ottawa. For more details on our site disclaimers, please see our ‘About’ page.