Ottawa 2024

Assessment for selection

Oral Presentation

4:00 pm

26 February 2024

M211


Debra Sibbald1
Andrea Sweezey1
1 Leslie Dan Faculty of Pharmacy, University of Toronto 



Background 
Assessment of intrinsic skills is important in health professions' admission processes. Admission criteria for U of T Pharmacy changed in 2019, 2020, and 2021 to address pandemic constraints and broaden access. We report associations within the applicant data of each admission cohort and their impact on early (1st- and 2nd-) year-end performance. 


Summary Of Work 
Admissions statistics for 3 cohorts (n = 720) admitted under different criteria were analyzed for applicant variables and year-end correlations. Data, including GPA, PCAT score, and situational judgment tests (MMI, CASPer, and a mixed-methods online assessment [MMOA]), were compared with year-end performance. A linear regression model examined admission metrics as predictors of in-program grades. 
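The regression approach described above can be sketched as a simple least-squares fit of year-end grades on entry GPA. This is a minimal illustration only: the data, variable names, and single-predictor form below are assumptions, and the study's actual model may have included additional covariates (PCAT, MMI, CASPer, MMOA scores).

```python
import numpy as np

def fit_simple_ols(x, y):
    """Fit y = a + b*x by least squares; return (slope, intercept, r-squared)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    b, a = np.polyfit(x, y, 1)              # degree-1 fit: slope, intercept
    y_hat = a + b * x
    ss_res = np.sum((y - y_hat) ** 2)        # residual sum of squares
    ss_tot = np.sum((y - y.mean()) ** 2)     # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    return b, a, r2

# Hypothetical cohort (illustrative values, not the study data):
# entry GPA vs first-year average grade
gpa = [3.2, 3.5, 3.6, 3.8, 3.9, 4.0]
grades = [68, 71, 75, 74, 80, 83]
slope, intercept, r2 = fit_simple_ols(gpa, grades)
```

The r-squared returned here is the same quantity reported per cohort in the results (e.g. r2 = 0.106 for the 2019 first-year model).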


Summary Of Results 
In each cohort, only entry GPA significantly predicted year-end results (1st year: 2019: r2 = 0.106, p < 0.001; 2020: r2 = 0.113, p < 0.001; 2021: r2 = 0.228, p < 0.001; 2nd year: 2019: r2 = 0.106, p < 0.001; 2020: r2 = 0.261, p < 0.001; 2021: r2 = 0.055, p < 0.001). No other significant predictors were found, although MMI (2019) and CASPer (2020) contributed to the regression model results. Widening access by decreasing required courses, eliminating the PCAT, and creating an online assessment increased applicant numbers (a 3.4-fold increase from 2020 to 2021) and raised minimum and average GPAs on entry. 


Discussion And Conclusion 
Results from three cohorts with varied admission criteria suggest GPA is the strongest predictor of early-year in-program metrics. Although not predictive, situational judgment tests, including online versions, provide useful screening criteria to differentiate diverse applicants. Predictors of program metrics in senior (3rd and 4th) years are currently being examined for impact and relevant differences. 


Take Home Messages 
Changing admission criteria increased applicant numbers with no apparent negative impact on year-end performance. 


References (maximum three) 

  1. Greatrix, R., Nicholson, S., & Anderson, S. (2021). Does the UKCAT predict performance in medical and dental school? A systematic review. BMJ Open, 11, e040128. https://doi.org/10.1136/bmjopen-2020-040128 

  2. Barber, C., Burgess, R., Mountjoy, M., Whyte, R., Vanstone, M., & Grierson, L. (2022). Associations between admissions factors and the need for remediation. Advances in Health Sciences Education, 27(2), 475-489. 

  3. Cameron, A. J., MacKeigan, L. D., Mitsakakis, N., & Pugsley, J. A. (2017). Multiple mini-interview predictive validity for performance on a pharmacy licensing examination. Medical Education, 51(4), 379-389. 

  4. Dore, K. L., Reiter, H. I., Kreuger, S., & Norman, G. R. (2017). CASPer, an online pre-interview screen for personal/professional characteristics: Prediction of national licensure scores. Advances in Health Sciences Education, 22(2), 327-336. https://doi.org/10.1007/s10459-016-9739-9 

Kylie Fitzgerald1
Brett Vaughan1 and Jane Fitzpatrick1
1 University of Melbourne 



Background
Evaluating selection methods informs best practice in specialist medical selection. Applicants undertake a Multiple-Mini-Interview (MMI) for trainee selection at the Australasian College of Sport and Exercise Physicians (ACSEP). The ACSEP MMI ran face-to-face in 2019, then online from 2020 due to COVID restrictions; the online format was retained to increase equity of access and mitigate costs for Australasian candidates. We report the reliability of the MMI for 2019-2021. 


Summary
A prospective observational design was used across three candidate cohorts. Station themes were aligned with the ACSEP curriculum domains. All stations were developed jointly by education and content experts and reviewed annually, based on the previous year's evaluation data. Interviewers received general MMI training in 2019, then online training for their specific MMI station in 2020-21. Generalisability analysis was used to evaluate the reliability of the MMI from 2019 to 2021. 
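The generalisability analysis above can be illustrated with a minimal sketch of a relative G coefficient for a one-facet crossed design (candidates crossed with stations). The design choice and the example scores below are assumptions for illustration, not the ACSEP data or the authors' exact analysis.

```python
import numpy as np

def g_coefficient(scores):
    """Relative G coefficient for a crossed persons x stations design.

    scores: 2-D array-like, rows = candidates, columns = stations.
    Variance components are estimated from the expected mean squares of a
    two-way random-effects ANOVA without replication, so the person x station
    interaction is confounded with residual error.
    """
    scores = np.asarray(scores, dtype=float)
    n_p, n_s = scores.shape
    grand = scores.mean()
    person_means = scores.mean(axis=1)
    station_means = scores.mean(axis=0)

    ss_p = n_s * np.sum((person_means - grand) ** 2)   # persons
    ss_s = n_p * np.sum((station_means - grand) ** 2)  # stations
    ss_tot = np.sum((scores - grand) ** 2)
    ss_res = ss_tot - ss_p - ss_s                      # interaction + error

    ms_p = ss_p / (n_p - 1)
    ms_res = ss_res / ((n_p - 1) * (n_s - 1))

    var_p = max((ms_p - ms_res) / n_s, 0.0)  # person variance component
    var_res = ms_res                         # person x station + error
    # Relative G: person variance over person variance plus error per station
    return var_p / (var_p + var_res / n_s)
```

A coefficient near 0.8, as reported for the 2020 and 2021 MMIs, indicates that most observed score variance is attributable to candidate differences rather than station or rater noise.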


Results
The seven-station 2019 MMI had an overall reliability of 0.43, prompting a major review. Changes for 2020 included adding a station for the domain of cultural safety, a shift from two interviewers per station to one, and station-specific training for interviewers. The 2020 overall reliability was 0.80; however, several stations were reviewed to increase their internal consistency. The 2021 overall reliability was 0.84, with 7 of 8 stations exceeding 0.70. 


Discussion
Cyclical review and evaluation over three years resulted in substantial improvement in the reliability of the ACSEP MMI. The “marks” achieved by candidates likely reflect abilities across the curriculum domains, and may be utilised for high-stakes selection decisions. 


Conclusions
MMI selection processes can be reliable at small scale. This research may inform selection processes for programs with small applicant numbers. 


Implications for further research
Additional research is required to develop evidence to support other elements of the validity argument for the ACSEP MMI. 

References (maximum three) 

NA