Ottawa 2024
Times are shown in GMT

Written / online knowledge-based assessment topics

Oral Presentation

11:30 am

28 February 2024

M214

Session Program

Keyna Bracken1
Amr Saleh2, Jeremy Sandor2, Matt Sibbald2, Michael Lee-Poy3 and Quang Ngo2
1 Michael G. DeGroote School of Medicine, McMaster University, AMEE member
2 Michael G. DeGroote School of Medicine, McMaster University
3 Michael G. DeGroote School of Medicine, McMaster University




Assessment of learning (commonly summative in nature and used to make learner progress decisions) and assessment for learning (formative, designed to provide low-stakes, meaningful feedback to learners) typically cohabit within a program of assessment.1,2

Ideally, formative assessments should be predictive of subsequent summative progress challenges. While the need for academic remediation is well established, there is disagreement on both the identification of, and effective interventions for, learners in difficulty.3 The finding of Landoll et al. that early academic interventions can positively impact struggling learners prompts consideration of how programmatic assessment for learning, with its many data points, may best support student learning.3

To better understand the link between formative assessments and progress difficulty, we conducted an analysis of four student cohorts (2022 to 2025) in our undergraduate MD program by comparing formative assessment scores on concept application exercises (CAE) with subsequent progress difficulty. 

CAE scores, designed to formatively assess knowledge translation, are not formally incorporated into the progress decision made at the end of each curricular unit, which is holistic in nature. Students are referred to the Student Progress Committee (SPC) if they fail to meet the curricular objectives.

To assess the predictive power of CAE score characteristics, we constructed a binary logistic regression model in SPSS 26, with SPC referral as the dependent variable and CAE score characteristics as independent variables.
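
For readers who want to reproduce this kind of analysis, a minimal sketch follows. The study used SPSS 26; this Python/statsmodels equivalent is illustrative only, and the file and column names (avg_cae, min_cae, spc_referral) are hypothetical stand-ins for the CAE score characteristics actually modelled.

```python
# Illustrative binary logistic regression, mirroring the SPSS analysis above.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("cae_scores.csv")  # hypothetical per-student cohort export

# Dependent variable: SPC referral (0/1); independents: CAE score characteristics
X = sm.add_constant(df[["avg_cae", "min_cae"]])
y = df["spc_referral"]

model = sm.Logit(y, X).fit()
print(model.summary())

# Exponentiated coefficients give odds ratios per one-unit change in each predictor
print(np.exp(model.params))
```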

We found that while a single CAE score is predictive of progress difficulty, the average score is the most powerful predictor, with each one-point drop in average score associated with 37% greater odds of subsequent progress difficulty.
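
To spell out the arithmetic behind that figure (our restatement, not the authors'): an odds ratio of 1.37 per one-point drop corresponds to a regression coefficient of

\[ \beta = -\ln(1.37) \approx -0.315, \qquad \mathrm{OR} = e^{0.315} \approx 1.37, \]

so a one-point drop in average CAE score multiplies the odds of SPC referral by about 1.37, and a three-point drop by about \(1.37^{3} \approx 2.6\).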

Our findings illustrate the ability of the CAE to predict progress difficulty. Next steps include analyzing low CAE scores alongside other indicators in our programmatic assessment to identify a threshold below which adverse progress outcomes can be inferred from the CAE.



References (maximum three) 

1. Watling C, Ginsburg S. Assessment, feedback and the alchemy of learning. Med Educ. 2019;53:76-85. https://doi.org/10.1111/medu.13645

2. Heeneman S, Oudkerk Pool A, Schuwirth L, van der Vleuten C, Driessen E. The impact of programmatic assessment on student learning: theory versus practice. Med Educ. 2015;49:487-498. https://doi.org/10.1111/medu.12645

3. Landoll R, Bennion L, Maranich A, et al. Extending growth curves: a trajectory monitoring approach to identification and interventions in struggling medical student learners. Adv Health Sci Educ. 2022;27:645-658. https://doi.org/10.1007/s10459-022-10109-7

Korakrit Imwattana1
1 Faculty of Medicine Siriraj Hospital, Mahidol University 



Background:
With their increased accessibility and versatility, online quizzes are increasingly used by educators for formative and summative evaluation. However, they are difficult to regulate (e.g. students may ask their friends or use unauthorized tools to help with the quiz), which may make the method inaccurate.


Summary of work:
At Siriraj Hospital, during the two-month Principles of Microbiology course, five online quizzes containing random questions drawn from a large pool of test items were used as milestone tests to evaluate knowledge outcomes. These quizzes were loosely regulated (e.g. there was no time or attempt limit, and students could use any tools to help with the quiz). The students (n = 320) had to pass each quiz once and could revisit the quizzes as many times as they wanted, with immediate feedback after each attempt. At the end of the course, students took an on-site, well-regulated summative examination to formally evaluate their competence.
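
The abstract does not name the quiz platform, so the following is purely a hypothetical sketch of the mechanics it describes: random items drawn from a pool, unlimited attempts, feedback after each attempt, and a single required pass. The pool contents and pass mark are invented.

```python
# Hypothetical model of a loosely regulated milestone quiz.
import random

POOL = [
    ("Catalase test distinguishes staphylococci from...?", "streptococci"),
    ("Gram reaction of Escherichia coli?", "negative"),
    # ... many more items in the real pool
]

def attempt_quiz(pool, n_items=2, pass_mark=0.6):
    """Run one attempt on a random sample of items; return whether it passed."""
    items = random.sample(pool, min(n_items, len(pool)))
    correct = sum(
        input(f"{prompt} ").strip().lower() == answer
        for prompt, answer in items
    )
    score = correct / len(items)
    print(f"Attempt score: {score:.0%}")  # immediate feedback after the attempt
    return score >= pass_mark

# Students may retry as many times as they want until they pass once
while not attempt_quiz(POOL):
    pass
```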


Results:
There was a significant correlation between online quiz scores and summative examination scores (p = 0.013). Further analysis showed that students who performed well in the summative examination had the following characteristics: [1] they passed the online quizzes within their early attempts (p < 0.001), [2] they continued to revisit the quizzes after passing (p = 0.011), and [3] they kept trying to improve their quiz scores (p = 0.002).
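
The specific statistical tests behind these p-values are not named in the abstract; one plausible way to run such an analysis (assuming a Spearman correlation and Mann-Whitney U tests, with hypothetical column names) would be:

```python
# Sketch of one plausible analysis; tests and column names are assumptions.
import pandas as pd
from scipy import stats

df = pd.read_csv("quiz_log.csv")  # hypothetical per-student summary

# Correlation between online quiz and summative examination scores
rho, p = stats.spearmanr(df["quiz_score"], df["summative_score"])
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")

# Do high summative performers pass in earlier attempts?
median = df["summative_score"].median()
high = df.loc[df["summative_score"] >= median, "attempts_to_pass"]
low = df.loc[df["summative_score"] < median, "attempts_to_pass"]
u, p = stats.mannwhitneyu(high, low)
print(f"Mann-Whitney U = {u:.0f}, p = {p:.3f}")
```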


Discussion and conclusion:
The results suggest that diligence may be more important than intelligence in Medical Microbiology, a helpful message for future students. As for online quizzes, although they cannot yet replace traditional examinations, they can be used as milestone tests, and careful monitoring of quiz progression during the course can help identify students who may need extra attention.


Take-home message:
Online quizzes can be used in addition to formal examinations to help monitor students. 



References (maximum three) 

1. Dobson JL. The use of formative online quizzes to enhance class preparation and scores on summative exams. Adv Physiol Educ. 2008 Dec;32(4):297-302. 

2. Irons A. Enhancing Learning Through Formative Assessment and Feedback. Abingdon, UK: Routledge; 2008. 

3. Brown GA, Bice MR, Shaw BS, Shaw I. Online quizzes promote inconsistent improvements on in-class test performance in introductory anatomy and physiology. Adv Physiol Educ. 2015 Jun;39(2):63-6. 

Jane Stanford1
1 Advanced Paediatric Life Support 



“Of course, not everything that counts can be counted. Furthermore, not all outcomes of importance can be measured. We should resist the temptation to focus on the measurable at the expense of the important.”1 

Although Advanced Paediatric Life Support (APLS) and other Structured Resuscitation Training (SRT) programs receive widespread professional endorsement, studies have shown limited and short-term change in clinicians’ knowledge, skills and behaviour. This could be because SRT outcomes (knowledge, skills and an approach to care) are measured in isolation, which is not how the content of these programs is applied in the clinical context. 

Script Concordance Tests (SCTs) have been validated as a measure of knowledge and clinical reasoning following clinical placement training programs. However, SCTs have not been validated as an assessment tool for SRT programs. 
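
For context, SCTs are conventionally scored with the aggregate method described in the validity literature (reference 2): an examinee earns credit for an item in proportion to how many reference-panel members chose the same response. A minimal sketch of that general technique, not taken from this study:

```python
# Standard SCT "aggregate" scoring: credit is the fraction of panelists who
# chose the examinee's response, relative to the modal panel response.
from collections import Counter

def sct_item_score(response: int, panel_responses: list[int]) -> float:
    """Credit = count of panelists choosing the same Likert anchor / modal count."""
    counts = Counter(panel_responses)
    modal_count = max(counts.values())
    return counts.get(response, 0) / modal_count

# Example: a 10-member reference panel answering on the -2..+2 scale
panel = [1, 1, 1, 1, 2, 2, 0, 0, 0, -1]
print(sct_item_score(1, panel))   # 1.0  (modal answer earns full credit)
print(sct_item_score(2, panel))   # 0.5  (partial credit, 2 of 4)
print(sct_item_score(-2, panel))  # 0.0  (no panelist chose this anchor)
```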

My Master's project was a validation study of an SCT for the APLS program. Guided by the frameworks of Messick and Kane, the study created and piloted an APLS SCT to collect qualitative and quantitative data for a validity argument. Despite small numbers, psychometric analysis indicated that the APLS SCT, as designed, showed potential to behave in a similar manner to SCTs created for other contexts. Larger studies with APLS learners will be required to further validate the SCT for the APLS context; however, this preliminary work yielded positive results.

Since this study, and with the continued implementation of programmatic assessment designs in other forms of training, the role of the SCT as a tool supporting assessment for learning, and the development of clinical reasoning in this setting, is worthy of review.

Aside from reporting the results of this study, this presentation outlines the application of test validation processes, of which psychometric analysis is only one component. 




References (maximum three) 

1. Wilkinson TJ. Outcomes from educational interventions [online]. Focus on Health Professional Education: A Multi-disciplinary Journal. 2016;17(1):I-III. 

2. Lubarsky S, Charlin B, Cook DA, Chalk C, van der Vleuten CP. Script concordance testing: a review of published validity evidence. Medical Education. 2011;45(4):329-38. 

3. Cook DA, Kuper A, Hatala R, Ginsburg S. When Assessment Data Are Words: Validity Evidence for Qualitative Educational Assessments. Academic Medicine. 2016;91(10):1359-69.