The Learning Curve & CBME:  Summing Up

Admin note: This post is the sixth and final in a series! Below is a list with links to the previous posts:

Part 1: Overture Click here to read
Part 2: The early phases of learning Click here to read
Part 3: The nonlinearity of learning Click here to read
Part 4: Standard setting and the learning curve Click here to read
Part 5: Inter-individual variability of learning curves Click here to read

—————————-

By: Martin Pusic (@mpusic) and Kathy Boutis (@ImageSim)

In this sixth and final blog post in our series on CBME and the learning curve, we summarize our learnings by considering how the learning curve representation intersects with the Core Components of CBME. These components are considered the sine qua non of a CBME approach (van Melle 2019)¹. If the learning curve framework were to contradict them, we would have a considerable problem on our hands. Thankfully (in fact, logically) there are no worries. In Figure 1 we have listed the Core Components and how they are supported by learning curve evidence. Let’s consider each in turn.

Specify Outcome Competencies

CBME requires an outcomes-based competency framework, which is exactly what the y-axis of the learning curve represents (see the first blog post). If we have a discriminating, reliable, and valid y-axis representing a particular competency construct, we can bring forth a detailed description of the learning journey, one of the key features of a competency-based approach. Competency standards, as discussed in the fourth blog post, are externally derived, consensus-based targets for CBME and may intersect the learning curve at any point.

However, it gets interesting when we cannot plot at least an overall group learning curve. When one learner doesn’t demonstrate a learning curve, we likely have one learner who needs remediation; but what if, in examining a group of learners, we can’t demonstrate at least a group-level learning curve? Now we have a problem with the educational design, one that bears investigation in terms of any number of issues: metrics, effort, instruction…

Figure 1:  The Core Components of CBME as they are represented on a learning curve

Two learning curves from different learners show different patterns of learning. These inter-individual differences are a key rationale for CBME.  The five core components of CBME are described as they relate to these example learning curves. 

Sequenced Progression

CBME requires progressive sequencing of learning. The learning curve shows, through its successive phases, how the natural progression of learning proceeds, allowing the instructor to avoid fighting mother nature. For example, we argued in the second blog post that attention paid in the latent phase is rewarded with compounding gains downstream. The non-linear nature of learning (third blog post) means that in the initial accelerating phase the educator would do well to just stay out of the way. However, when the learner hits the second inflection point, the deceleration of learning has to be contextualized with respect to societal expectations of competency. In the fourth blog post we unpacked how standard setting can variably interact with the learning curve. Finally, in terms of learning progression, expert asymptotic learning feels natural if the person has been trained for it; it doesn’t feel natural if a flat “I am done learning” plateau was expected.
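The phases described above (latent start, acceleration, inflection, deceleration toward the expert asymptote) can be sketched with a simple sigmoid model. This is a minimal illustrative sketch: the logistic form and every parameter value below are assumptions chosen for demonstration, not fitted data from any study in this series.

```python
import numpy as np

def logistic_curve(x, y0=0.30, ymax=0.95, k=0.05, x_mid=60):
    """Illustrative sigmoid learning curve (parameters are made up):
    a slow latent phase near y0, acceleration, an inflection at x_mid,
    then deceleration toward the asymptote ymax."""
    return y0 + (ymax - y0) / (1 + np.exp(-k * (x - x_mid)))

cases = np.arange(0, 201)          # practice repetitions (x-axis)
accuracy = logistic_curve(cases)   # competency measure (y-axis)

# Gain per case is largest near the inflection point (x_mid = 60):
# the curve accelerates before it and decelerates after it.
gain_per_case = np.diff(accuracy)
```

Plotting `accuracy` against `cases` reproduces the canonical S-shape discussed in the second and third blog posts; the point of maximum `gain_per_case` marks where acceleration turns into deceleration.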

Tailored Learning Experiences

CBME requires that instruction be based on the constructivist notion that all learners are different and each constructs their knowledge differently. In the fifth blog post, we demonstrated the tremendous variability in both where learners start (the y-intercept) and their learning path (the shape and slope of individual learning curves). This variability is nicely shown by plotting individual learning curves, and statistical analyses can provide insights as to how much inter-individual variability there is across a group of learners. While there may be tremendous inter-individual variability early on, it tends to diminish with learning. This reflects the truism that while there are unlimited ways of being wrong, most people who are doing something well do it in relatively the same way.
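That narrowing of inter-individual variability can be simulated. In the hypothetical sketch below (all distributions and parameters are assumptions for illustration), each learner gets a different starting point and learning rate but a shared asymptote, encoding the idea that competent performers converge on similar performance:

```python
import numpy as np

rng = np.random.default_rng(0)

def learner_curve(cases, y0, k):
    """Illustrative exponential-approach curve. y0 (intercept) and
    k (learning rate) vary by learner; the shared asymptote (0.95)
    represents convergence among competent performers."""
    ymax = 0.95
    return ymax - (ymax - y0) * np.exp(-k * cases)

cases = np.arange(0, 301)
# 50 simulated learners with different intercepts and learning rates.
curves = np.stack([
    learner_curve(cases, y0=rng.uniform(0.2, 0.6), k=rng.uniform(0.01, 0.03))
    for _ in range(50)
])

spread_start = curves[:, 0].std()   # between-learner spread at case 1
spread_end = curves[:, -1].std()    # between-learner spread at case 300
```

In this toy model `spread_end` is far smaller than `spread_start`: the wide fan of early performance collapses as everyone approaches the shared asymptote, mirroring the truism above.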

Competency-focused instruction

CBME requires learning experiences tailored to the competencies. Here the x-axis comes into play: how is time or repetition being used in service of competency achievement? We would argue that the learning curve is a natural representation of the degree to which educators have been successful in using time for learning. In essence, educators should be accountable for the slope of the learning curve and for maximizing it (steep = good!). Having the learner gradually take over responsibility for the learning curve slope is the ultimate, sustainable competency.
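If the slope is what educators are accountable for, it helps that it is straightforward to estimate from logged scores. A minimal sketch, using simulated (not real) data with an assumed true gain of 0.005 per case plus measurement noise:

```python
import numpy as np

# Simulated scores for one learner across 80 practice cases:
# an assumed true gain of 0.005 per case, plus measurement noise.
rng = np.random.default_rng(2)
cases = np.arange(80)
scores = 0.40 + 0.005 * cases + rng.normal(0, 0.03, cases.size)

# Ordinary least squares recovers the average gain per case -- the
# slope an educator would want to see maximized (steep = good!).
slope, intercept = np.polyfit(cases, scores, 1)
```

With enough observations, the fitted `slope` lands close to the assumed 0.005 per case despite the noise, giving a single number an educator (and eventually the learner) could track over time.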

We point out that competency is multi-faceted. In the figure below we show the averaged learning curve of residents learning radiograph interpretation in terms of both their accuracy (proportion of correct diagnoses) and their fluency (the speed with which they complete the task). One could argue that the resident is not fully competent until they are both accurate AND fluent. However, as pointed out in the fourth blog post, the competency standard is open to social consensus.

Figure 2.  Multi-dimensional learning curves

In the averaged group learning curves shown above, two different aspects of six residents learning to interpret radiographs with immediate feedback are shown. In the top curve, accuracy improves until the residents have completed about 120 cases. Time per case, on the other hand, doesn’t really plateau until the residents have done many more cases (~225). One could argue that for the cases between 120 and 225 the residents are solidifying their skill and building their fluency. Reproduced with permission from Pusic 2015².

Programmatic assessment

Finally, CBME requires programmatic assessment. Learning curves won’t work for every competency or for every type of assessment. But we would argue that they could work for many more situations than we use them for now. If we can’t detect a learning curve, the conclusion isn’t that it isn’t there; it is (Pusic 2018³). It may just mean that we can’t get the signal-to-noise ratio right to find it (see Figure 1 in the fifth blog post on inter-individual variability).
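The signal-to-noise point can be made concrete with simulated data: the same shallow underlying learning curve can be statistically detectable or effectively invisible depending on measurement noise. In this hedged sketch (the curve, noise levels, and seed are all assumptions), the t-statistic of the fitted slope serves as a simple detectability index:

```python
import numpy as np

rng = np.random.default_rng(3)
cases = np.arange(100.0)
signal = 0.002 * cases  # an assumed genuine, but shallow, learning curve

def slope_t_stat(y, x):
    """t-statistic of the OLS slope: how far above the noise floor
    the fitted improvement-per-case sits."""
    b, a = np.polyfit(x, y, 1)
    resid = y - (a + b * x)
    se = np.sqrt(resid.var(ddof=2) / ((x - x.mean()) ** 2).sum())
    return b / se

# Same underlying curve, two measurement-noise levels.
t_quiet = slope_t_stat(signal + rng.normal(0, 0.05, 100), cases)
t_noisy = slope_t_stat(signal + rng.normal(0, 1.0, 100), cases)
```

With low noise, `t_quiet` is comfortably significant and the curve is "found"; with high noise, `t_noisy` hovers near zero even though the learning is still happening underneath, which is precisely the failure-to-detect scenario described above.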

In a CBME program of assessment, having some skills where an explicit learning curve is tracked and followed — for learning efficiency, attainment and reliability — will provide a rich description of a group’s competency journey and will anchor a CBME program in the organic miracle that is learning by humans: the reliable association of targeted effort with proportionate achievement. 

The Final Summary

Whew!  It took six blog posts to demonstrate all the ways learning curve theory underpins CBME.  Let’s summarize the key messages:

  • Learning is non-linear with multiple inflection points
  • A lot is happening in the latent phase where a learner is getting organized to learn
  • Expert learning is asymptotic learning – it never stops
  • Competency standards are independent of the learning curve that defines how hard something is to learn.  That’s as it should be; learning curves are for learners while competency standards are for patients. 
  • No one’s learning curve matches anyone else’s, and certainly not the theoretical one; that doesn’t negate the theory, it means that learning is harder than our old conceptualizations would have us think it is. 
  • The learning curve framework aligns well with the core components of CBME 

How effort is rewarded with improvement in skill is the fundamental relationship in education.  The learning curve describes this relationship both empirically and in (scientific) theory.  It is a solid basis for understanding key aspects of CBME. 

About the authors:
Martin Pusic, MD PhD is Associate Professor of Pediatrics and Emergency Medicine at Harvard Medical School, Senior Associate Faculty at Boston Children’s Hospital and Scholar-In-Residence at the Brigham Education Institute. 
Kathy Boutis, MD FRCPC MSc is Staff Emergency Physician, Senior Associate Scientist, Research Institute at The Hospital for Sick Children and Professor of Pediatrics at the University of Toronto.

References

1. Van Melle E, JR Frank, ES Holmboe, D Dagnone, D Stockley, J Sherbino, International Competency-based Medical Education Collaborators. A core components framework for evaluating implementation of competency-based medical education programs. Academic Medicine. 2019;94(7):1002-9. 

2. Pusic MV, K Boutis, R Hatala, DA Cook. Learning curves in health professions education. Academic Medicine. 2015;90(8):1034-42. 

3. Pusic MV, K Boutis, WC McGaghie. Role of scientific theory in simulation education research. Simul Healthc. 2018;13(3S Suppl 1):S7-S14.

The views and opinions expressed in this post are those of the author(s) and do not necessarily reflect the official policy or position of The Royal College of Physicians and Surgeons of Canada. For more details on our site disclaimers, please see our ‘About’ page