By Jonathan Sherbino (@sherbino)
This is the seventh post in a series on systematic education design. (Here are the previous topics: introduction, needs assessment, objectives, instructional methods.)
Today’s topic is assessment. (As an aside, assessment has traditionally been the term used to describe the appraisal of learners, while evaluation has been reserved for the appraisal of programs.) While assessment (like the other topics in this series) is too vast to cover in detail in a blog post, I think there are some key ideas that should be mentioned.
Assessment can be defined as “the process of collecting, synthesizing and interpreting information to aid decision making.”1 Two steps are key to this process:
- Mapping the assessment to the learning objective; and
- Mapping the assessment instrument to the sophistication of learning being tested.
Five key messages regarding assessment are:
(For more details, check out Educational Design: A CanMEDS Guide for Health Professions.)
Finally, the most important message is that there is no silver bullet. Efforts to refine or develop a single instrument that will provide a valid judgment of a learner’s global competence are misguided. Rather, the goal is an assessment program that integrates multiple instruments (quantitative and qualitative) in a valid manner, permitting a global judgment of competence.
In a future post, we’ll look at contemporary models of validity (from Kane and Messick) and contrast them with traditional psychometric concepts of reliability and validity as a means of ensuring that judgments are “true.”
1. Airasian P. Classroom assessment. 3rd ed. New York (NY): McGraw-Hill; 1997.
Figure 1 adapted from The assessment of clinical skills / competence / performance
Figure 2 courtesy of Educational Design: a CanMEDS guide for the health professions.