Ottawa 2024

When is programmatic assessment not programmatic assessment?

Oral Presentation

Presentation Description

Shelley Ross1
Kent G. Hecker2 and Todd Milford3
1 University of Alberta
2 University of Calgary
3 University of Victoria




Background:
In 2005, van der Vleuten and Schuwirth first introduced the concept of programmatic assessment in health professions education (HPE). Since then, programmatic assessment has been a productive area of HPE research, particularly within the competency-based medical education (CBME) community. Programmatic assessment rests on five basic assumptions: a longitudinal perspective on the development of competence; regular, meaningful feedback to learners; inclusion of workplace-based assessment; decision-making based on cumulative assessment data; and multiple points and methods of assessment data collection. However, our team noted an emerging trend in the recent CBME literature suggesting diverging definitions of programmatic assessment. We systematically explored this phenomenon through a narrative review focused on how programmatic assessment is being described in CBME.


Summary of work:
We conducted a literature search of English-language peer-reviewed publications from 2005 to 2023. Initial search terms were “competency-based” and “assessment”; subsequent searches added “programmatic assessment”. Abstracts were scanned to identify patterns in the data, and data were interpreted through a modified nominal group consensus process.


Results:
The initial search returned 1584 records; adding “programmatic assessment” reduced the results to 259 records. We found relative stability over time in the definition of “programmatic assessment”, with the notable exception of a decrease in the variety of assessment methods and tools being used.

Discussion:
Programmatic assessment continues to be a goal in CBME. Authors describe programs of assessment that generally reflect van der Vleuten and Schuwirth’s core assumptions. However, we identified a trend toward decreased variety in assessment methods, with an increasing number of articles describing programmatic assessment designed around a single assessment tool.


Conclusions:
There is both promise and peril in the current state of programmatic assessment in CBME. 


Implications for future research: 
A full scoping review of this topic is needed to more thoroughly explore how authors define and describe programmatic assessment in CBME.




References

1. van der Vleuten CP, Schuwirth LW. Assessing professional competence: from methods to programmes. Med Educ. 2005; 39: 309–17. 

2. Bok HGJ, de Jong LH, O’Neill T, Hecker KG. Validity evidence for programmatic assessment in competency-based education. Perspect Med Educ. 2018; 7: 362-372. https://doi.org/10.1007/s40037-018-0481-2 

3. Ross S, Hauer K, Wycliffe-Jones K, Hall AK, Molgaard L, Richardson D, Oswald A, Bhanji F. Key considerations in planning and designing programmatic assessment in competency-based medical education. Med Teach. 2021; 43(7): 758-764. https://doi.org/10.1080/0142159X.2021.1925099
