Selection and other transitions between levels and settings of education
ePoster
2:00 pm
27 February 2024
Exhibition Hall (Poster 1)
Session Program
2:00 pm
Elina Ng1
1 Curtin Medical School
Background
Australian medical schools use multi-faceted admissions protocols to select students for their medical program, but the effectiveness of these tools is unclear.
Summary of work
This study, nested within a longitudinal study, examines the correlation of Situational Judgement Test (SJT) and Multiple Mini Interview (MMI) scores with well-being and with personal characteristics that predict academic success for incoming medical students. The study uses six validated instruments to measure perceived stress, quality of life (QoL), and specific personal traits, including Tolerance of Ambiguity (ToA), Grit, Conscientiousness and Academic Resilience (AR), annually for Year 1, Year 3, and final-year medical students. Data for all other variables are sourced from existing admissions and assessment databases.
Results
The data collected suggest a moderate but significant correlation between SJT and MMI scores (r=0.299; p<0.01). None of the selection variables currently used predicts success in Years 1, 3, and 5 of MBBS studies. Year 3 medical students report significantly lower QoL (mean=5.95, SD=1.44, N=60) and higher stress levels (mean=2.95, SD=0.71, N=60) compared to Year 1 students.
Discussion
It is noteworthy that the SJT scores within the UCAT-ANZ test have a significant correlation with the MMI scores. Further research can identify the specific overlap between the two assessment methods, streamlining the medical student selection process. The Undergraduate Medical Education Committee will discuss and investigate the lower quality of life and higher stress of Year 3 students, followed up by the Senior Leadership Group if necessary.
Conclusions
The primary outcome of the longitudinal study will be available in 2026 when the 2022 matriculating cohort graduates. The medical school will continuously monitor the primary and secondary outcome variables through an annual cross-sectional analysis of cumulative data collected.
Take-home message
A longitudinal study that tracks admissions, well-being, and assessment data is important for continuous quality improvement.
References (maximum three)
1. Lievens, F., Coetsier, P., De Fruyt, F. and De Maeseneer, J. (2002), Medical students' personality characteristics and academic performance: a five-factor model perspective. Medical Education, 36: 1050-1056. https://doi.org/10.1046/j.1365-2923.2002.01328.x
2. Burgis-Kasthala S, Elmitt N, Smyth L, Moore M. (2019). Predicting future performance in medical students. A longitudinal study examining the effects of resilience on low and higher performing students. Med Teach. 41(10):1184–91.
3. Patterson F, Cleland J, Cousans F. (2017). Selection methods in healthcare professions: where are we now and where next? Adv Health Sci Educ. 22(2):229-42.
2:05 pm
Nicola Claudius
Sandra Carr1, Kiah Evans, Rachel Collins and Timothy Ford
1 The University of Western Australia
BACKGROUND
Multiple Mini Interviews (MMI) have become an increasingly common selection assessment since their development in 2004 (1). The use of MMI as a selection tool for entry into undergraduate medical schools is widely examined in the literature. Despite regular use as a selection assessment for postgraduate sub-specialty physician training across the world, there is little published evidence on the utility of MMI for this purpose (2). This scoping review aims to address this gap by exploring, analysing and synthesising published research evidence on the utility of using MMI as a selection assessment for physician specialty training.
SUMMARY OF WORK
The scoping review is being conducted in accordance with the JBI methodology for scoping reviews, and the protocol has been submitted. A search of PubMed and OVID (MedLine and MedBase) is underway to identify relevant research studies published in peer-reviewed journals between January 2004 and July 2023. Resulting articles will be reviewed by two independent reviewers to reach consensus on eligibility. Data will be extracted from the full text of included articles, including study characteristics and MMI utility aspects according to the Van der Vleuten framework (reliability, validity, educational impact, acceptability and feasibility) (3). Once analysed, data will be synthesised in tables, descriptive statistics, key themes and implications for practice.
TAKE-HOME MESSAGES / IMPLICATIONS FOR FURTHER RESEARCH
Results, discussion points, conclusions and take-home messages will be available to present at Ottawa 2024. Recommendations will be made for further research to address any evidence gaps identified through the scoping review, along with how to progress this field of research to the next level. Implications for postgraduate selection assessments for physician specialty training will be outlined, with corresponding recommendations to facilitate optimal utility of this process.
References (maximum three)
1. Eva KW, Rosenfeld J, Reiter HI, Norman GR. An admissions OSCE: the multiple mini-interview. Med Educ. 2004;38(3):314-26.
2. Roberts C, Khanna P, Rigby L, Bartle E, Llewellyn A, Gustavs J, et al. Utility of selection methods for specialist medical training: A BEME (best evidence medical education) systematic review: BEME guide no. 45. Med Teach. 2018;40(1):3-19.
3. Van Der Vleuten CP. The assessment of professional competence: Developments, research and practical implications. Adv Health Sci Educ Theory Pract. 1996;1(1):41-67.
2:10 pm
Munaib Chowdhury1
Suhyun Youn1, Maria Karavassilis1, Dmitriy Chernov1, Thomas Braby1 and Senan Devendra1
1 West Hertfordshire Teaching Hospitals NHS Trust
The transition from medical student to junior doctor is challenging. To ease this, the Preparing for Patients (PfP) program was developed for medical students in the United Kingdom as their final placement before entering clinical practice. The program's aim is to enable medical students to think and act like a doctor. However, the current curriculum lacked the structure and guidance needed to accurately reflect the competencies expected of a new junior doctor.
We present the Zuboshi Bullseye program, a list of 50 tasks for medical students from University College London to try to complete during their four-week PfP block. The list was created with input from students and from junior and senior clinicians, and was designed to reflect the key competencies expected of a junior doctor, encompassing clinical and communication skills and cultural safety. Group A was assessed in 2021, with progress assessed retrospectively. In 2022, Group B was introduced to the tasks prospectively. Finally, Group C was informed at the beginning that prizes would be awarded for the highest achievers. Feedback was also collected on how prepared students felt at the end of the attachment.
We observed an increase in progress year-on-year. In Group A, the mean number of tasks achieved was 6.5 (n=6, SD=1.76), increasing to 16.2 in Group B (n=24, SD=6.6) and 22.9 in Group C (n=11, SD=3.96, p=0.003). Our results indicated that students preferred a clearer structure when navigating the placement and felt more confident in their preparedness. Incentivization of progress had a significant positive impact on progression. Revisions of the list improved overall impressions of utility, suggesting that keeping the list dynamic maximised the overall benefits of the block.
In conclusion, we have shown that a locally adapted incentivized task list improves the reported preparedness of medical students in their final clinical placement.
References (maximum three)
General Medical Council (2015). Promoting excellence: standards for medical education and training. Available at: https://www.gmc-uk.org/education/standards-guidance-and-curricula/standards-and-outcomes/promoting-excellence (Accessed: 5 May 2023).
2:15 pm
Sira Vachatimanont1,2
Panot Sainamthip3,4, Talent Theparee5,6, Aisawan Petchlorlian7,8, Nattaya Poovorawan5,9, Phattrawan Pisuchpen5,10, Danai Wangsaturaka4 and Nijasri Charnnarong Suwanwela5,11
1 Chulalongkorn University
2 King Chulalongkorn Memorial Hospital, Thai Red Cross Society
3 Division of Academic Affairs, Faculty of Medicine, Chulalongkorn University
4 Department of Pharmacology, Faculty of Medicine, Chulalongkorn University
5 Chulalongkorn University International Doctor of Medicine Programme (CU-MEDi), Faculty of Medicine Chulalongkorn University
6 Department of Pathology, Faculty of Medicine, Chulalongkorn University
7 Geriatric Excellence Center, Faculty of Medicine, Chulalongkorn University
8 Department of Medicine, King Chulalongkorn Memorial Hospital
9 Medical Oncology Unit, King Chulalongkorn Memorial Hospital
10 Department of Ophthalmology, Faculty of Medicine, Chulalongkorn University
11 Department of Medicine, Faculty of Medicine, Chulalongkorn University
Background:
Remote multiple mini-interviews (MMIs) raise potential concerns, including the viability, integrity, and validity of the MMI [1,2]. Our medical school had to conduct MMIs for candidate selection remotely for two years in adaptation to the pandemic. We present this abstract to share our experience of administering large-scale, high-stakes MMIs remotely.
Summary of Work:
Because of the COVID-19 travel restrictions, we moved our MMI to the Zoom teleconferencing platform to give all candidates equal access: seventy candidates in the first year and fifty in the second. With the time constraint of 25 candidates in 3 hours, the MMI had to be delivered in two rounds of four parallel loops. To ensure integrity, we created a separate virtual meeting room for each loop and assigned a proctor to perform a one-on-one registration process, akin to the internet-based TOEFL exam. We created a breakout room for each interviewer, with an additional proctor to manage the time and move candidates between breakout rooms.
Results:
MMI interviewers were generally pleased with the experience, noting minimal intervention needed to conduct the exam. However, interviewers suggested that observation of non-verbal cues was limited due to the small field of view of webcams. Technical difficulties were minimal.
Discussion:
Our implementation suggests that the remote MMI format is feasible despite initial concerns. The format enables the assessment of potential candidates from abroad without incurring travel expenses. A few remaining limitations may be alleviated by future technology, such as whole-body virtual reality to capture additional non-verbal cues.
Conclusions:
Remote MMI can be a promising option for large-scale, high-stakes assessments and is worth further investigation and development.
Implications:
- Remote MMI is feasible for large-scale, high-stakes assessments, such as medical school entry selection.
- The major limitation is the limited assessment of candidates' non-verbal cues.
References (maximum three)
- Hammond S, McLaughlin JE, Cox WC. Validity evidence for a virtual multiple mini interview at a pharmacy program. BMC Medical Education. 2023 Aug 3;23(1):551.
- Sabesan V, Young L, Carlisle K, Vangaveti V, Vu T, Van Erp A, Kapur N. Effects of candidates' demographics and evaluation of the virtual Multiple Mini Interview (vMMI) as a tool for selection into paediatric training in Queensland. Medical Teacher. 2023 Apr 4:1-7.
2:20 pm
Hedva Chiu1
Timothy Wood1, Adam Garber1, Wade Gofton1, Samantha Halman1, Janelle Rekman1 and Nancy Dudek1
1 The University of Ottawa
Background:
Workplace-based assessment (WBA) is a recognized assessment method for competence in post-graduate medical education.1,2 Most WBA relies on physician supervisors. However, in a complex training environment where supervisors are unavailable to observe certain aspects of a trainee’s performance, nurses are well-positioned to do so. The Ottawa Resident Observation Form for Nurses (O-RON) was developed to capture nurses’ assessment of trainee performance and results have demonstrated strong evidence for validity in Orthopaedic Surgery. However, different clinical settings can impact a tool’s performance. This project studied the use of the O-RON in three different specialties at the University of Ottawa (UO).
Summary of work:
O-RON forms were distributed on the Internal Medicine, General Surgery, and Obstetrical wards at UO over nine months. Validity evidence related to quantitative data was collected. Exit interviews with nurse managers were performed and content was thematically analyzed.
Results:
179 O-RONs were completed on 30 residents. With four forms per resident, the O-RON's reliability was 0.82. Global judgement responses and frequency of concerns were correlated (r = 0.627, P<0.001). Exit interviews identified factors impacting form completion, which included heavy clinical workloads and large numbers of residents.
Discussion:
Consistent with the original study, the findings demonstrated strong evidence for validity. However, the total number of forms collected was lower than expected, which appears to be due to environmental factors.
Conclusion:
The O-RON is a useful tool to capture nurses’ assessment of trainee performance and demonstrated reliable results in various clinical settings. However, understanding the assessment environment and ensuring it has the capacity to perform this assessment is crucial for successful implementation.
Implications for future research:
Input from nurses on resident performance is valuable and the O-RON captures this assessment. Future research should focus on how we can create conditions whereby implementing this tool is feasible from the perspective of nurses.
References (maximum three)
1. Epstein RM, Hundert EM. Defining and assessing professional competence. JAMA. 2002;287(2):226-235. doi:10.1001/jama.287.2.226
2. Prentice S, Benson J, Kirkpatrick E, Schuwirth L. Workplace-based assessments in postgraduate medical education: A hermeneutic review. Medical Education. 2020;54(11):981-992. doi:10.1111/medu.14221
2:25 pm
Suriyaarachchige Silva1
1 Fellow-AIDH, AF-AMEE, GP - Ochre Health
Background
Because Health Informatics (HI) is an extremely practical subject, the capstone project of a postgraduate HI course needs to be practically applicable to the real-world environment. Therefore, it is important to assess such a project with criteria and methods that closely resemble the real-world environment.
Summary of work
A focus group discussion was held among three specialists in HI, two clinicians and one medical educationist. Important evaluation criteria and methods of assessment were identified as themes. Existing real-world frameworks were consulted to determine entities similar to the identified themes and to draft the framework.
Results
A comprehensive evaluation framework was developed using the Health Metrics Network (HMN), the mechanism proposed by the World Health Organization for evaluating health information systems [1]. This defines six standards and components for an HI system. The components of the HMN were further reinforced by including competency-based parameters of assessment for a project [2]. Assessment modes associated with project-based learning, such as peer assessment and co-assessment [3], were included to resemble the dynamics of developing a real-world HI project.
Discussion
The developed framework represents real-world HI solutions in two key aspects: it uses a real-world recommendation, the WHO's HMN, and it replicates the real-world multi-team collaborative environment through the use of project-based learning tools. It ensures completeness by assessing the learning objectives of all modules with the inclusion of competency-based assessment.
Conclusions
In Phase 2 of the study, this framework will be used to assess more than 50 final-year student projects from the first intake of a newly developed postgraduate HI course.
Take-home messages / implications for further research or practice
An evaluation framework of a practical subject should include components that are used in the real-world practice of that subject.
References (maximum three)
[1] Health Metrics Network and World Health Organization, 2008. Assessing the national health information system: an assessment tool. World Health Organization.
[2] Fraile R, Argüelles I, González JC, Gutiérrez-Arriola JM, Benavente C, Arriero L, Osés D. A proposal for the evaluation of final year projects in a competence-based learning framework. In: IEEE EDUCON 2010 Conference; 2010 Apr 14. p. 929-934. IEEE.
[3] Van den Bergh V, Mortelmans D, Spooren P, Van Petegem P, Gijbels D, Vanthournout G. New assessment modes within project-based education: the stakeholders. Studies in Educational Evaluation. 2006;32(4):345-68.