Ottawa 2024

The role of programmatic assessment as a predictive tool to inform early identification and support of students in difficulty

Presentation

Presentation Description

David Rojas1, Glendon Tait1 and Mahan Kulasegaram1
1 University of Toronto



Background
Programmatic assessment is an increasingly adopted paradigm that uses frequent low-stakes assessments to track the growth of medical students' knowledge and competence over time and to inform coaching and high-stakes decisions (1). However, the ability of these assessment tools to identify students in difficulty early has not been fully explored.


Summary of work
At the MD Program, University of Toronto, we studied whether longitudinal Progress Test (PT) data (spanning all four years) and non-mandatory pre-clerkship assessment data could predict whether a student would fail the licensing exam. We conducted a sensitivity analysis (2) of PT data from the cohorts that graduated in 2020 and 2021, and developed a machine learning (ML) model (3) using quantitative and qualitative data from the non-mandatory pre-clerkship assessments of the 2021 cohort.


Results 
The sensitivity analysis (2) of longitudinal PT data showed specificity (the proportion of students who would pass correctly classified as such) above 92.5%, and a negative predictive value (the percentage of students classified as likely to pass who actually passed the licensing exam) above 99%. PT performance at the end of year 2 or the beginning of year 3 could help identify students at risk of failing the licensing exam.
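As a minimal sketch of how the two metrics above are derived (not the study's actual code), specificity and negative predictive value follow directly from confusion-matrix counts. Here a "positive" prediction means the PT data flag a student as at risk of failing, and the counts are hypothetical:

```python
# Illustrative sketch with hypothetical counts, not the study's data.
# "Positive" = flagged as at risk of failing the licensing exam.

def specificity_and_npv(tp, fp, tn, fn):
    """Return (specificity, NPV) from confusion-matrix counts.

    specificity: of students who actually passed, the fraction
                 correctly classified as "would pass" (true negatives).
    NPV:         of students classified as "would pass", the fraction
                 who actually passed the licensing exam.
    """
    specificity = tn / (tn + fp)
    npv = tn / (tn + fn)
    return specificity, npv

# Hypothetical counts for a cohort of 200 students:
spec, npv = specificity_and_npv(tp=4, fp=10, tn=185, fn=1)
print(f"specificity = {spec:.1%}, NPV = {npv:.1%}")
```

With these made-up counts the two metrics land in the same ranges reported above (specificity above 92.5%, NPV above 99%), which illustrates how few at-risk students are missed when NPV is high.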

The ML model (3), developed using pre-clerkship assessment data, showed an accuracy of 82.33% and detected students in difficulty at a rate of 86.39%.


Discussion 
Our work supports the use of PTs to inform students' learning, while also showing that technology-supported methods (ML) can offer similar levels of accuracy by combining quantitative and qualitative data.


Conclusions
We have shown that two different methodologies, using different assessment variables, can offer reliable analyses of student performance for the early identification of students in difficulty.


Take-home messages 
Organically generated data points and technologically supported solutions (ML) are reliable resources for the early identification of students in difficulty. 



References

  1. Heeneman S, de Jong LH, Dawson L, Wilkinson TJ, Ryan A, Tait GR, Rice N, Torre D, Freeman A, van der Vleuten CPM. (2021). Ottawa 2020 consensus statement for programmatic assessment 1: Agreement on principles. Medical Teacher. doi:10.1080/0142159X.2021.1957088.

  2. Monaghan TF, Rahman SN, Agudelo CW, Wein AJ, Lazar JM, Everaert K, Dmochowski RR. (2021). Foundational statistical principles in medical research: Sensitivity, specificity, positive predictive value, and negative predictive value. Medicina, 57(5), 503. doi:10.3390/medicina57050503.

  3. Anguita D, Ghelardoni L, Ghio A, Oneto L, Ridella S. (2012). The 'K' in K-fold cross validation. In Proceedings of ESANN 2012 (pp. 441-446).
