Assessment in entry level health professional education (primary / undergraduate)
Oral Presentation
2:00 pm
27 February 2024
M210
Session Program
2:00 pm
Karen Donald1
Travis Haber2, Deb Virtue1, Jessica Lees1, Jessica Stander1, Samantha Byrne3, Elaina Kefalianos1, Nicole Hill1, Bronwyn Tarrant1, Lisa Cheshire1, Anthea Cochrane1, Tamara Clements1, Lauren Story1, and Miki Maruyama1
1 The University of Melbourne
2 Melbourne Dental School, The University of Melbourne
Background
Health professional students frequently learn, practise, and demonstrate key professional and technical skills in practical classes. As a result, “attendance hurdles” are often applied to these classes.
Summary of work
This study explores whether attendance hurdles for practical classes are based on sound pedagogy and describes staff experiences, attitudes, and beliefs about attendance hurdle practices.
A scoping review of the literature was undertaken to establish the evidence on attendance at practical classes and its correlation with performance in Medicine, Dentistry, Health Sciences and Science courses in tertiary education settings. Building on the findings of this scoping review, we surveyed a purposeful sample of 68 academic staff in Medicine, Dentistry, Optometry, Physiotherapy, Social Work, Nursing, Speech Pathology, and Audiology to determine their experiences, attitudes and beliefs about attendance hurdles for practical classes.
Results
Demographic data of survey participants and rates and reasons for maintaining or abolishing attendance hurdles will be presented. We will describe staff beliefs about why attendance hurdles are needed, and how student attendance, skill acquisition and competency might be maintained in the absence of hurdle requirements.
Discussion
This study describes the experiences, attitudes and beliefs related to attendance hurdles for practical classes in health professional education and compares these results with the findings of the scoping review, potentially informing recommendations for the use of attendance hurdles for practical classes. We will also discuss how these results address gaps in the literature and highlight directions for future research.
Conclusions
The study will be of interest to health professional educators who teach subjects with practical classes. The scoping review and current study may inform assessment design, policy and organisational approaches to attendance hurdles for practical classes.
Implications for further research
Further research should explore students' experiences, attitudes and beliefs about attendance hurdles for practical classes.
References (maximum three)
Best, R., & Best, R. (2009). The use of assessment hurdles: Pedagogy v. practicality. In Australasian Universities Building Educators Conference (https://pure.bond.edu.au/ws/portalfiles/portal/29061471/The_use_of_assessment_hurdles.pdf )
Lamb, S., Chow, C., Lindsley, J., Stevenson, A., Roussel, D., Shaffer, K., & Samuelson, W. (2020). Learning from failure: how eliminating required attendance sparked the beginning of a medical school transformation. Perspectives on medical education, 9, 314-317.
Peters, M. D. J., Marnie, C., Tricco, A. C., Pollock, D., Munn, Z., Alexander, L., et al. (2020). Updated methodological guidance for the conduct of scoping reviews. JBI Evidence Synthesis, 18.
2:15 pm
Matthew Sibbald1
Tom Alexander1, Haroon Yousuf1, Deborah Azim Fleming1 and Adrian Alexander2
1 Michael G. DeGroote School of Medicine, McMaster University, Hamilton, CANADA.
2 Alfred Health, Melbourne, VIC, AUSTRALIA
Background:
Narrative assessments are ubiquitous in medical student assessment.1 There is a paucity of research into the validity, reliability or quality of narrative assessments, particularly for undergraduate medical students.1 There is some preliminary research and emerging evidence on what constitutes quality in narrative assessments.1
Importance:
There is little research into how student assessment in general, and narrative assessments in particular, varied in quality during the “far” pandemic/virtual state as compared with the “near” pre-pandemic/in-person state.
Method:
Faculty raters applied the 7-point narrative quality index drawn from Chakroun et al.'s comprehensive scoping review while blinded to whether each de-identified, randomly drawn narrative assessment from our database was created in the pandemic/virtual or pre-pandemic/in-person state, and whether it resulted in a satisfactory result or a provisional satisfactory result requiring remediation. In addition to assigning quality estimates to the narrative assessments, faculty raters guessed the state in which the narrative assessment was assigned and whether the result was satisfactory or provisional satisfactory requiring remediation. Our database was analyzed for main-effect statistical differences and correlations of quality across a number of predictive variables between the pandemic and pre-pandemic states.
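The abstract does not include analysis code, but a minimal sketch of the kind of comparison described above might look like the following Python, assuming quality scores on the 7-point index and a binary state label; the variable names and the randomly generated placeholder data are purely illustrative, not the study's actual data or method.

```python
# Sketch: compare narrative quality between states and check blinded rater guesses.
# Placeholder data only; the real study used blinded faculty ratings.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
quality_pre = rng.integers(3, 8, size=40)       # pre-pandemic/in-person quality (1-7 index)
quality_pandemic = rng.integers(2, 8, size=40)  # pandemic/virtual quality (1-7 index)

# Main-effect difference in mean quality between the two states
t_stat, p_value = stats.ttest_ind(quality_pre, quality_pandemic)
print(f"pre={quality_pre.mean():.2f}  pandemic={quality_pandemic.mean():.2f}  p={p_value:.3f}")

# How often raters correctly guessed the state (1 = pandemic, 0 = pre-pandemic)
guessed_state = rng.integers(0, 2, size=80)
true_state = rng.integers(0, 2, size=80)
print(f"guess accuracy: {(guessed_state == true_state).mean():.2%}")
```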
Workshop format:
An active learning workshop format using interactive participant e-polling platforms, sequenced with a progressive reveal of our results to promote engagement and discussion.
Who should participate:
Learners and faculty interested in medical student assessment and medical education research.
Level of workshop:
Intermediate.
Workshop outcomes:
1) Review the literature on narrative assessments for medical students
2) Describe 3 challenges for faculty assessors in writing narrative assessments for medical students in the pre-clerkship.
3) Compare estimates of narrative assessment quality from your setting and context with our data during a progressive reveal of study results.
4) Implement applications of virtual vs. in-person assessment in your setting with greater specificity based on our literature review and study results.
References (maximum three)
1) Chakroun, M., et al. (2023). Quality of Narratives in Assessment: Piloting a List of Evidence-Based Quality Indicators. Perspectives on Medical Education, 12(1).
2) Hatala, R., et al. (2017). Using in-training evaluation report (ITER) qualitative comments to assess medical students and residents: A systematic review. Academic Medicine, 92(6), 868-79.
3) Chan, T., Monteiro, S., et al. (2020). The Quality of Assessment of Learning (QuAL) score: Validity evidence for a scoring system aimed at rating short, workplace-based comments on trainee performance. Teaching and Learning in Medicine, 32(3).
2:30 pm
Nara Jones1,2
Linda Grose1, Francesco Amico1 and Conor Gilligan3,1
1 University of Newcastle
2 University of Tasmania
3 EACH
Background
The surgical viva voce exam (Viva) at our institution is a high-stakes assessment delivered to over 200 students per year through multiple multi-question exam stations. Ensuring its integrity is challenging for heterogeneous reasons, including reliance on time-poor clinicians as examiners.
Concerns raised in 2022 over assessor variability, fail rates and student feedback mandated a structural revision. Questions, marking guidance, and exam delivery were modified, aiming to improve exam validity and reliability.
Summary of work
We rewrote exam questions to better explore students' clinical knowledge, and constructed example answers and assessor guidance mapped to the marking schema. These were provided to assessors for calibration, along with a short online briefing.
Each student was assessed by eight different assessors across the stations to reduce bias. Each station was standard-set separately using the Borderline Regression Method to limit the impact of variable station difficulty.
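As an illustration of that standard-setting step, the following is a minimal sketch of the Borderline Regression Method for a single station, assuming each examiner records a numeric station score together with a global rating on a 1-5 scale where 3 is "borderline"; the scale, function name, and example figures are assumptions for illustration, not the exam's actual data.

```python
# Sketch of the Borderline Regression Method (BRM) for one viva station.
import numpy as np

def brm_cut_score(station_scores, global_ratings, borderline_rating=3.0):
    """Regress station scores on global ratings and return the score predicted
    at the borderline rating, which becomes the station's pass mark."""
    slope, intercept = np.polyfit(global_ratings, station_scores, deg=1)
    return slope * borderline_rating + intercept

# Illustrative scores out of 20 with paired global ratings (1 = clear fail ... 5 = clear pass)
scores = np.array([8, 11, 12, 14, 16, 18, 9, 13, 15, 17], dtype=float)
ratings = np.array([1, 2, 3, 3, 4, 5, 2, 3, 4, 5], dtype=float)
print(f"Station pass mark: {brm_cut_score(scores, ratings):.1f} / 20")
```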
A hybrid (online and face-to-face) delivery was chosen to enhance assessor recruitment and academic integrity. A survey is scheduled for November 2023 to explore assessor experience.
Results
Data obtained from half of the 2023 cohort demonstrates low assessor standard deviations, strong station statistics, and perceived improvements in the quality and specificity of assessor feedback. Similar results are expected for the full cohort in November 2023.
Discussion
Unintended consequences of such approaches include rigidity, which limits the capacity of assessors to apply expert judgement and can force student results into pre-established categories. This limitation can be counterbalanced by selecting ad hoc sub-specialty examiners.
Conclusions
Revision of the surgical Viva proved beneficial to overall exam quality.
Take-home messages/ implications for further research or practice
A proactive approach to the continuous revision of assessment methods is paramount, as these methods contribute significantly to student progression decisions in healthcare education.
References (maximum three)
1. Imran M, Doshi C, Kharadi D. Structured and unstructured viva voce assessment: A double-blind, randomized, comparative evaluation of medical students. Int J Health Sci (Qassim). 2019 Mar-Apr;13(2):3-9. PMID: 30983939; PMCID: PMC6436443.
2:45 pm
Heeyoung Han1
1 Southern Illinois University School of Medicine
Background
Medical schools regularly collect learner assessment and evaluation data, yet its use for curriculum innovation can be hindered by a lack of centralized data governance, such as a dashboard. Schools need various technologies to support their curricula, which makes it harder to integrate data from different systems into a dashboard, especially at resource-scarce medical schools. Our school is a community-based medical school with limited IT staff. This presentation will share our organizational change process to develop and implement dashboards and its impact on curriculum innovation.
Summary of work
We went through a longitudinal change process to centralize educational data governance, including collection, management, and usage, through dashboard development. Given limited resources, we minimized investment in in-house development and instead focused on change processes: forming groups, digitizing data, integrating systems, and engaging stakeholders. These change processes were coupled with the school's move toward programmatic assessment. We conducted a participatory program evaluation with students to understand stakeholders' adoption and its impact on teaching and learning.
Results
We formed an Educational Informatics Committee at the school in 2015 and have re-engineered our assessment and evaluation programs and processes. A student data dashboard has been in place since 2021 and has successfully supported the curriculum change toward programmatic assessment. A faculty teaching dashboard has been in place since 2022 to support faculty development.
Discussion
Our experience can be a model for other schools, especially resource-scarce institutions, to adopt, develop, and implement dashboards. While presenting our story, we will discuss relationships, strategic organizational change processes, and utility-focused program evaluation to highlight lessons learned from the successful change process with little to no development team.
Conclusions
Dashboards can be instrumental in student and faculty development. Thoughtful organizational change processes should be considered, especially for resource-scarce schools.
Take-home messages
Developing dashboards is an organizational change process. Future practice should explore feasible models for resource-scarce medical schools.
References (maximum three)
Han, H., Mosley, M., (Yvette) Igbokwe, I., Tischkau, S. (2022). Institutional Culture of Student Empowerment: Redefining the Roles of Students and Technology. In: Witchel, H.J., Lee, M.W. (eds) Technologies in Biomedical and Life Sciences Education. Methods in Physiology. Springer, Cham. https://doi.org/10.1007/978-3-030-95633-2_3
Han, H., Resch, D. R., Kovach, R. A. (2013) Educational Technology in Medical Education, Teaching and Learning in Medicine, 25:sup1, S39-S43, DOI: 10.1080/10401334.2013.842914
3:00 pm
Mike Tweed1
Robin Willink1 and Tim Wilkinson1
1 University of Otago
Background
In clinical practice, the certainty a clinician has in their clinical decisions, including whether they need to seek further resources, is important. Such self-monitoring can be assessed in medical students using response certainty in MCQs, but how might it be followed over time?
Summary of work
With each answer on MCQ progress tests, medical students provided their certainty of its correctness. We have proposed that aspects of self-monitoring include insightfulness (increasing correctness with rising certainty), safety (a high probability of correctness for ‘high certainty’ responses) and efficiency (a low probability of correctness for ‘no certainty’ responses). Each could be classified as present, absent, or undetermined. A tracking system was developed using first principles and data from one cohort of students, and a dataset from a second cohort was used as an independent check.
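By way of illustration, a minimal sketch of how these three aspects might be classified from certainty-tagged responses is shown below; the certainty scale (0 = no certainty to 3 = high certainty), the thresholds, and the minimum response counts are illustrative assumptions, not the authors' actual criteria.

```python
# Sketch: classify insightfulness, safety and efficiency for one student on one test.
import numpy as np

def classify_self_monitoring(correct, certainty, min_n=10):
    """correct: 1/0 per item; certainty: assumed 0 = none ... 3 = high per item."""
    correct = np.asarray(correct, dtype=float)
    certainty = np.asarray(certainty)

    def prop_correct(mask):
        # Proportion correct, or None when too few responses to judge (undetermined)
        return correct[mask].mean() if mask.sum() >= min_n else None

    p_high = prop_correct(certainty == 3)  # safety: high-certainty responses
    p_none = prop_correct(certainty == 0)  # efficiency: no-certainty responses

    # Insightfulness: proportion correct rises with each certainty level
    levels = sorted(set(certainty.tolist()))
    by_level = [correct[certainty == c].mean() for c in levels]
    insightful = all(a <= b for a, b in zip(by_level, by_level[1:]))

    return {
        "insightfulness": "present" if insightful else "absent",  # simplified: no undetermined case
        "safety": ("undetermined" if p_high is None
                   else "present" if p_high >= 0.9 else "absent"),
        "efficiency": ("undetermined" if p_none is None
                       else "present" if p_none <= 0.4 else "absent"),
    }
```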
Results
The patterns of aspects of self-monitoring were consistent across both cohorts. Nearly all students met the criteria for insightfulness on all tests. Most students met the criteria for efficiency, with the highest prevalence mid-course, whereas the absence of efficiency increased later. Most safety results were undetermined, but when a definitive result was obtained, it was more likely to be absent mid-course and present later in the course.
Discussion
Throughout the course, students showed reassuring levels of insightfulness. The results suggest that students may balance safety with efficiency. This may be explained by students learning the positive implications of decisions earlier in the course, becoming more efficient, and the negative implications later, becoming more cautious and safer.
Conclusion
Analysis of correctness for different levels of certainty allowed for the tracking of self-monitoring as students progressed through the course, and revealed differences in insightfulness, safety, and efficiency.
Take-home messages
Item response certainty has the potential to introduce self-monitoring and track it.
References (maximum three)
Johnson WR, Durning SJ, Allard RJ, Barelski AM, Artino Jr AR. A scoping review of self‐monitoring in graduate medical education. Medical Education 2023.
McConnell MM, Regehr G, Wood TJ, Eva KW. Self-monitoring and its relationship to medical knowledge. Advances in Health Sciences Education 2012; 17(3): 311-23.
Tweed M, Purdie G, Wilkinson T. Low performing students have insightfulness when they reflect‐in‐action. Medical Education 2017; 51(3): 316-23.