Times are shown in GMT.
Predicting student outcomes
Oral Presentation
2:00 pm
27 February 2024
Plenary 1
Session Program
2:00 pm
Ben Kumwenda1
Bonnie Lynch2, Angela Kubacki3 and Jon Dowell4
1 Centre for Medical Education, School of Medicine, University of Dundee.
2 Centre for Medical Education, University of Dundee
3 Institute of Medical and Biomedical Education, St George's, University of London
4 Scottish Graduate Entry Medicine (ScotGEM), School of Medicine, University of Dundee
Background:
The Multiple Mini Interview (MMI) is used internationally as a selection tool for medical school admissions. The MMI is a series of short, one-on-one interviews that assess such attributes as communication, problem-solving, and teamwork skills [1,2]. This study investigated the predictive validity of the MMI for the following outcome measures: medical school performance (Educational Performance Measure [EPM], Situational Judgement Test [SJT], Prescribing Safety Assessment [PSA]) and passing professional membership exams in medicine (RCGP, MRCP, MRCS). The study included data from two medical schools in the UK.
Methods:
Data from 4990 doctors who graduated from UK medical schools and sat the first part of professional membership exams in 2017-2019 were used. The UK Medical Education Database [3] provided linked data from different sources, including medical school admissions, assessments, and postgraduate training. Multinomial logistic regression analyses estimated the odds of passing a college membership exam on the first attempt.
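The sketch below is a minimal illustration of this kind of analysis, not the UKMED analysis itself: it fits a multinomial logistic regression of exam outcome on selection scores using synthetic data and hypothetical variable names (mmi_score, ucas_tariff, ucat_score).

```python
# Illustrative only: multinomial logistic regression of exam outcome on
# selection scores. Data and column names are hypothetical, not UKMED data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "mmi_score": rng.normal(size=n),     # standardised MMI score (hypothetical)
    "ucas_tariff": rng.normal(size=n),   # standardised school grades (hypothetical)
    "ucat_score": rng.normal(size=n),    # standardised aptitude test (hypothetical)
})
# Outcome categories: 0 = failed, 1 = passed on a later attempt, 2 = passed first time
latent = 0.6 * df["mmi_score"] + 0.4 * df["ucat_score"] + rng.normal(size=n)
df["outcome"] = pd.cut(latent, bins=[-np.inf, -0.5, 0.5, np.inf], labels=False)

X = sm.add_constant(df[["mmi_score", "ucas_tariff", "ucat_score"]])
fit = sm.MNLogit(df["outcome"], X).fit(disp=False)
print(fit.summary())
print(np.exp(fit.params))  # odds ratios relative to the reference outcome (0 = failed)
```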
Results and Conclusion:
MMI was a significant predictor of medical school performance, even after controlling for other factors such as high school grades (UCAS scores) and clinical aptitude test scores (UCAT). The MMI was also a significant predictor of passing college exams on the first attempt, but the effect size was smaller than for assessments taken nearer to postgraduate training (EPM, SJT, and PSA scores).
The findings suggest that the MMI is a valid predictor of educational success in both medical school and postgraduate training. Although the proportion of variance explained by MMI and all other predictors is small, MMI remains a valuable tool for medical school admissions. In the absence of innovations that can improve prediction, medical schools should continue using MMI in combination with other factors, such as UCAS and UCAT scores, to make admissions decisions.
References (maximum three)
1. Brownell K, Lockyer J, Collin T, Lemay J. Introduction of the multiple mini interview into the admissions process at the University of Calgary: acceptability and feasibility. Med Teach. 2007;29(4):394-396.
2. Dowell J, Lynch B, Husbands A, Kumwenda B. The Multiple Mini-Interview in the UK context: three years of experience at Dundee. Med Teach. 2012;34:297-304.
3. Dowell J, et al. The UK medical education database (UKMED): what is it? Why and how might you use it? BMC Med Educ. 2018;18(1):6.
2:15 pm
Aimee Gardner1
Paula Costa2
1 Baylor College of Medicine
2 SurgWise
Introduction
Postgraduate medical education programs are actively searching for efficient, effective, and equitable solutions for selecting applicants into their programs. Situational judgment tests (SJTs) have been proposed as one approach to making headway on this complex issue. We performed a multi-institutional study examining how SJTs designed to measure an array of desirable competencies predict multiple dimensions of performance through the first two years of residency training.
Methods
Applicants to general surgery residency programs in the United States completed unique program-specific SJTs as part of their application packets over three selection cycles. Performance along the dimensions of patient care, medical knowledge, systems-based practice, practice-based learning and improvement, interpersonal and communication skills, and professionalism was collected for those selected into each program during the first two years of training. Descriptive statistics and correlations were computed, and analysis of variance (ANOVA) tests were then run to examine differences between four SJT performance groups.
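As a rough illustration of the group comparison described above, the sketch below runs a one-way ANOVA of overall milestone scores across four SJT performance quartiles; the file and column names (resident_outcomes.csv, sjt_score, milestone_overall) are hypothetical assumptions, not the study's actual dataset.

```python
# Illustrative only: one-way ANOVA comparing overall milestone scores across
# four SJT performance groups. File and column names are hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_csv("resident_outcomes.csv")  # hypothetical per-resident dataset
df["sjt_group"] = pd.qcut(df["sjt_score"], 4, labels=["Q1", "Q2", "Q3", "Q4"])

groups = [g["milestone_overall"].dropna()
          for _, g in df.groupby("sjt_group", observed=True)]
f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```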
Results
Performance data were collected for PGY1 and PGY2 residents from seven surgery residency programs. SJT performance was positively related to patient care, medical knowledge, practice-based learning and improvement, professionalism, interpersonal and communication skills, and overall milestone scores. In general, higher performance on the SJT was associated with higher performance later in training, with the exception of interpersonal and communication skills. Residents who did not complete an SJT assessment performed significantly worse overall compared to all other trainees.
Discussion
SJTs demonstrate promise for assessing important noncognitive attributes in residency applicants and align with international efforts to review candidates more holistically and minimize potential biases. These findings demonstrate the value of the SJT methodology in postgraduate medical education selection and in predicting future in-training performance.
References (maximum three)
1. Cullen MJ, Zhang C, Sackett PR, Thakker K, Young JQ. Can a Situational Judgment Test Identify Trainees at Risk of Professionalism Issues? A Multi-Institutional, Prospective Cohort Study. Acad Med. 2022 Oct 1;97(10):1494-1503.
2. Willis RE, Kempenich JW, Patnaik R, Dent DL. Identifying Potential Attrition during the Residency Applicant Screening Process Using a Situational Judgment Test. J Surg Educ. 2022 Nov-Dec;79(6):e103-e108.
3. Gardner AK, Dunkin BJ. Evaluation of Validity Evidence for Personality, Emotional Intelligence, and Situational Judgment Tests to Identify Successful Residents. JAMA Surg. 2018 May 1;153(5):409-416.
2:30 pm
Shanika Nanayakkara1
Heiko Spallek1, Delyse Leadbeatter1 and Jinlong Gao1
1 The University of Sydney
Background
Assessments for selection to dentistry are expected to effectively identify applicants who will succeed in the program and subsequently become effective members of the profession upon graduation. The assessment tools used in dental admissions are complex and vary among universities and across jurisdictions (1). The Graduate Australian Medical School Admissions Test (GAMSAT) is widely used in admissions to postgraduate-entry programs in medicine and dentistry across Australia (2).
Summary of work
This study examined the relationship between GAMSAT scores and academic performance of students enrolled from 2016-2021 in the Doctor of Dental Medicine program at the University of Sydney.
Results
Admission and academic performance data for 553 students were analyzed. A weak negative correlation was observed between GAMSAT section II and section III scores. No statistically significant correlations were found between GAMSAT section or overall scores and students' performance in preclinical and clinical assessments.
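A minimal sketch of the kind of correlation analysis reported here is shown below; the file and column names (dental_cohort.csv, gamsat_s2, preclinical_mark, etc.) are hypothetical and do not reflect the study's actual dataset.

```python
# Illustrative only: Pearson correlations between GAMSAT scores and assessment
# performance. File and column names are hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_csv("dental_cohort.csv")  # hypothetical admissions + assessment data

for predictor in ["gamsat_s2", "gamsat_s3", "gamsat_overall"]:
    for outcome in ["preclinical_mark", "clinical_mark"]:
        r, p = stats.pearsonr(df[predictor], df[outcome])
        print(f"{predictor} vs {outcome}: r = {r:.2f}, p = {p:.3f}")
```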
Discussion
GAMSAT alone cannot accurately predict dental students' academic performance. This analysis was limited to single-center data over a limited timespan, and a multi-center prospective study is recommended to further evaluate the value of GAMSAT in dental admissions. This study does not assess the ultimate outcome that all dental education institutions desire, namely that their graduates improve the oral health of the population; however, it represents a valuable step in this direction.
Conclusion
Overemphasizing GAMSAT scores in dental admissions may not result in the selection of applicants who are suitable and ready for dentistry.
Take-home messages / implications for further research or practice
- Further research to evaluate the predictive validity of GAMSAT scores in dental admissions on students' academic performance is warranted.
- GAMSAT section and overall scores provide weak prediction of students' academic performance in dentistry.
- There is a need for selection instruments to achieve optimal outcomes in dental admissions.
References (maximum three)
- Cunningham C, Kiezebrink K. Insights on selection of undergraduate dental students. European Journal of Dental Education. 2022 Jul 22.
- Sladek RM, Bond MJ, Frost LK, Prior KN. Predicting success in medical school: a longitudinal study of common Australian student selection tools. BMC medical education. 2016 Dec;16:1-7.
2:45 pm
Colleen Robb1
Pierre Banks2, Liesel Copeland3, Alex MacIntosh1 and Kelly Dore1,4
1 Acuity Insights
2 UTMB John Sealy School of Medicine
3 Robert Wood Johnson Medical School - Rutgers University
4 McMaster University
Holistic admissions, a process in which admissions committees assess each applicant's experiences and personal characteristics in tandem with their academic preparedness, is a key strategy for admitting a diverse student body who will succeed in the program and later in the workforce (1). Accordingly, healthcare education programs are increasingly employing holistic admissions. Despite the benefits, holistic admissions are both resource and time intensive, and traditional methods for assessing personal characteristics prior to interview (e.g., reference letters) have evidenced poor reliability and predictive validity (2). Other options, however, have the potential to bring the right candidates to the interview. Open-response situational judgement tests (SJTs), for example, have shown promise in providing reliable and valid measures of personal characteristics and tend to produce smaller demographic differences than close-ended assessments (3).
Casper, an open-response SJT used widely in health professions programs across North America and Australia, has adopted a new format that includes an audio-video response component. While previous work has shown that this new format further reduces demographic differences, this study aims to examine the predictive validity of the new format with regard to multiple mini-interview (MMI) performance.
Admissions data collected from 1,011 interviewees to UTMB John Sealy School of Medicine and Rutgers Robert Wood Johnson Medical School (RWJMS) were used to evaluate predictive validity. A single-predictor bivariate logistic regression model for each school found that, for every one-unit increase in raw Casper score (on a scale from 1-9), the odds of receiving a high MMI score (as determined by each program) increased by 159.50% for UTMB (OR: 2.56, 95% CI [2.07, 3.30]) and by 66.78% for Rutgers RWJMS (OR: 1.67, 95% CI [1.08, 2.65]).
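The sketch below illustrates, with hypothetical file and column names (interviewee_data.csv, casper_raw, high_mmi), how such a single-predictor logistic regression can be fitted and how an odds ratio translates into the percentage change in odds quoted above: an OR of about 2.60 corresponds to roughly 160% higher odds per one-point increase in Casper score.

```python
# Illustrative only: a single-predictor logistic regression of a binary
# "high MMI score" indicator on raw Casper score. Names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("interviewee_data.csv")      # hypothetical per-school dataset
X = sm.add_constant(df["casper_raw"])         # Casper score on the 1-9 scale
fit = sm.Logit(df["high_mmi"], X).fit(disp=False)

odds_ratio = np.exp(fit.params["casper_raw"])
pct_change = (odds_ratio - 1) * 100           # e.g. OR 2.60 -> ~160% higher odds per point
ci_low, ci_high = np.exp(fit.conf_int().loc["casper_raw"])
print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f}), "
      f"{pct_change:+.1f}% change in odds per point")
```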
These findings support Casper as a valuable tool within the holistic admissions framework by identifying candidates early in the admissions process with strong personal characteristics who are likely to perform well later in the interview stage.
References (maximum three)
(1) Glazer, G., Danek, J., Michaels, J., Bankston, K., Fair, M., Johnson, S., & Nivet, M. (2014). Holistic admissions in the health professions: Findings from a national survey. Urban Universities for HEALTH. https://cdn.ymaws.com/www.aptrweb.org/resource/collection/9BC8166F-3A40-4028-B091-C78275D40573/Holistic_Admissions_in_the_Health_Professions.pdf
(2) Salvatori, P. (2001). Reliability and validity of admissions tools used to select students for the health professions. Advances in Health Sciences Education, 6, 159-175. https://link.springer.com/article/10.1023/A:1011489618208
(3) Lievens, F., Sackett, P. R., Dahlke, J. A., Oostrom, J. K., DeSoete, B. (2019). Constructed response formats and their effects on minority-majority differences and validity. Journal of Applied Psychology, 104(5), 715-726. https://psycnet.apa.org/doi/10.1037/apl0000367
3:00 pm
Umatul Khoiriyah1
1 Department of Medical Education, Faculty of Medicine, Universitas Islam Indonesia
Background
Student selection in medical institutions is crucial in predicting whether a student will succeed during medical training and become a competent clinician. The Faculty of Medicine, Universitas Islam Indonesia (FM UII) has developed a two-stage examination as its selection method. Stage 1 consists of a written examination. Stage 2 includes an aptitude test, a basic medical knowledge test, and multiple mini-interviews (understanding of a reading text, motivation, and collaborative learning practice). This study aimed to identify whether each selection component could predict students' performance.
Summary of work
Data from 160 final-year students were included in this study, comprising scores from the selection process, final GPA in the medical undergraduate program, and tutorial performance. Each selection component was correlated with GPA, and the collaborative learning practice score was also analyzed against tutorial performance. Lastly, linear regression analysis was applied with GPA as the dependent variable.
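A minimal sketch of this analysis pipeline is shown below, assuming hypothetical column names (written_exam, aptitude_iq, medical_knowledge, reading_test, collab_learning, gpa); GPA is treated as the outcome in the regression.

```python
# Illustrative only: correlating each selection component with final GPA, then
# an OLS regression with GPA as the dependent variable. Names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

df = pd.read_csv("selection_and_gpa.csv")  # hypothetical cohort data (n = 160)

components = ["written_exam", "aptitude_iq", "medical_knowledge",
              "reading_test", "collab_learning"]
for col in components:
    r, p = stats.pearsonr(df[col], df["gpa"])
    print(f"{col}: r = {r:.2f}, p = {p:.3f}")

ols = smf.ols("gpa ~ written_exam + medical_knowledge", data=df).fit()
print(ols.summary())
```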
Summary of results
Scores on the written examination, medical knowledge test, and collaborative learning practice correlated significantly with GPA (p < 0.05). The aptitude (IQ) and reading test scores did not correlate significantly with GPA. The collaborative learning score also correlated significantly with tutorial performance during the undergraduate program. Regression analysis showed that written examination and medical knowledge test scores significantly predicted students' GPA (p < 0.05).
Discussion
The cognitive tests applied in student selection could predict students' learning performance during the medical program, whereas IQ was not an essential component in predicting learning performance. Moreover, students' capability for collaborative learning also predicted their capability in small-group discussion, which is the primary learning activity at FM UII.
Take home message
The design of the selection method should align with essential characteristics needed during medical training.
References (maximum three)
Alam F, Lim YC, Chaw LL, et al. Multiple mini-interviews is a predictor of students' academic achievements in early undergraduate medical years: a retrospective study. BMC Med Educ. 2023;23:187.
Patterson F, Knight A, Dowell J, Nicholson S, Cousans F, Cleland J. How effective are selection methods in medical education? A systematic review. Med Educ. 2016;50:36-60.
3:15 pm
Sharona Kanofsky1
Marla Nayer1, Peter Tzakas1 and Melissa Hynes1
1 University of Toronto
Background
Admission to healthcare programs is a competitive process. Selection criteria should predict the best future healthcare providers (HCPs). Past academic scores predict future scores, but beyond grades, do selection criteria truly predict the best HCPs? This study aimed to determine whether the selection process in one Physician Assistant (PA) program in Canada predicts student performance.
Summary of work
University of Toronto’s PA program admissions process consists of an application file review and a multiple mini interview (MMI). File review of approximately 1000 applicants includes scores for grade point average, prior healthcare experience, personal statements, and references. Approximately 100 candidates are invited to the MMI based on the file review score. Of these, 30 candidates are admitted based on MMI score.
In this study, four PA faculty ranked former students in three graduating cohorts based on the faculty's impression of overall performance at graduation. Faculty placed each student in the upper 20%, middle 60%, or lower 20% of their class. Rankings were then compared with the admissions components.
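A minimal sketch of how such rankings might be compared with admissions components is shown below, using Spearman rank correlations as one plausible choice of test (the abstract does not specify the method); the file and column names (pa_cohorts.csv, faculty_placement, file_review_score, mmi_rank) are hypothetical.

```python
# Illustrative only: rank correlations between faculty placement (1 = lower 20%,
# 2 = middle 60%, 3 = upper 20%) and admissions components. Names are hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_csv("pa_cohorts.csv")  # hypothetical data for three graduating cohorts

rho_file, p_file = stats.spearmanr(df["faculty_placement"], df["file_review_score"])
rho_mmi, p_mmi = stats.spearmanr(df["faculty_placement"], df["mmi_rank"])
print(f"File review score: rho = {rho_file:.2f}, p = {p_file:.3f}")
print(f"Post-MMI ranking:  rho = {rho_mmi:.2f}, p = {p_mmi:.3f}")
```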
Results
Faculty ranking correlated significantly with file review scores. There was strong inter-faculty agreement. Faculty ranking did not correlate with candidate ranking post-MMI.
Discussion
Healthcare program admissions are resource intensive, and evidence can inform their effectiveness, efficiency, and fairness. Our findings support the file review process by demonstrating that faculty impressions and file review scores correlate well. Consistent with the literature on MMIs, we found no correlation between faculty ranking and post-MMI ranking. Study limitations include the small sample size and the fact that two of the faculty raters were non-clinical.
Conclusions
Faculty rankings at the end of PA education correlated well with file review ranking.
Take-home messages and implications
Selecting students for healthcare education is challenging, and we must determine which selection components best predict successful HCPs. Data collection for this study should continue, with further investigation of the contribution of MMIs.
References (maximum three)
Henderson, M. C., C. J. Kelly, E. Griffin, T. R. Hall, A. Jerant, E. M. Peterson, J. A. Rainwater, F. J. Sousa, D. Wofsy and P. Franks (2018). "Medical school applicant characteristics associated with performance in Multiple Mini-Interviews versus traditional interviews: A multi-Institutional study." Academic Medicine 93(7): 1029-1034.
Pau, A., K. Jeevaratnam, Y. S. Chen, A. A. Fall, C. Khoo and V. D. Nadarajah (2013). "The Multiple Mini-Interview (MMI) for student selection in health professions training – A systematic review." Medical Teacher 35(12): 1027-1041.
Rees, E. L., A. W. Hawarden, G. Dent, R. Hays, J. Bates and A. B. Hassell (2016). "Evidence regarding the utility of multiple mini-interview (MMI) for selection to undergraduate health programs: A BEME systematic review: BEME Guide No. 37." Medical Teacher 38(5): 443-455.