Assessment tools and instruments
Oral Presentation
10:00 am
28 February 2024
M206
Session Program
10:00 am
Konstantin Brass1
1 Institute for Communication and Assessment Research
Background:
To meet the challenges of a continuously changing medical assessment culture, institutions have to cooperate more intensively. To this end, the Umbrella Consortium for Assessment Networks (UCAN) was formed 18 years ago and is now carried by the non-profit Institute for Communication and Assessment Research. UCAN is an academic association comprising 95 faculties, professional societies, medical associations and other institutions from eight countries, which support each other by optimizing the following resources: (1) network, (2) development, (3) standardisation, (4) quality assurance, (5) exchange and (6) research.
Summary of work:
Over the last 18 years, UCAN has successfully developed a comprehensive portfolio of 14 assessment tools covering the entire examination workflow. In collaboration with its partner institutions, UCAN has been developing standards, workflows and tools for the planning, preparation, implementation, evaluation, quality assurance and feedback of examinations.
Summary of results:
So far, over 16,100 colleagues have added more than 930,000 items to a common platform for authoring, sharing and reviewing items and exams. Best-practice examples for reliable exams, assessment contents and workflows are collected and implemented at the partner institutions. New item and exam formats are continuously developed. To date, over 15 million students have been successfully assessed in 54,000 exams.
Discussion and Conclusions:
Especially with the upcoming conceptual, logistical and developmental challenges associated with the shift from knowledge-based to competency-based assessment, such alliances are highly recommended. 18 years of cooperation in a collaborative network has proven to be an efficient way to face challenges in medical assessment.
Take-home message:
Assessment institutions should work together to tackle common challenges. 18 years of successful cooperation within UCAN proves this approach to be both innovative and feasible.
10:15 am
Luan Au1
My Do1 and Hien Nguyen1
1 University of Medicine and Pharmacy at Hochiminh City (UMP)
The Readiness Assurance Test (RAT) is the icon of Team-Based Learning (TBL). Most educators consider the RAT an assessment for learning: RATs contribute to learners' self-awareness, promote self-directed learning and redirect learning processes. Evidence demonstrates that a relevant RAT enhances the effectiveness of in-class activities, while an irrelevant RAT derails the entire TBL process. Hence, crafting a relevant RAT is a primary requirement for ensuring TBL effectiveness. Unfortunately, some educators compose the RAT the same way they would write a conventional multiple-choice question (MCQ) test, consequently making the RAT irrelevant.
This paper aims to summarize prominent issues influencing the relevance of readiness assurance processes, compare current MCQ-writing practices with RAT crafting, and suggest specific practical points to help practitioners make the RAT more relevant.
When crafting the RAT, educators should respect the basic technical rules for writing MCQs, align the RAT blueprint with the course expected learning outcomes (ELOs), and ensure that the preparatory materials elucidate the tested concepts. These are the three general requirements influencing the validity of the RAT. Technically, educators should keep the number of items reasonable, test concepts rather than separate ideas, focus on frequently misunderstood concepts and keep the number of difficult items reasonable. Respecting these points saves time that can then be used to achieve teaching purposes. Targeting higher-order-thinking MCQs, using aggregates, and testing subjects of debate might negatively impact the relevance of the RAT, so educators should avoid them.
References (maximum three)
European Board of Medical Assessors. 2017. EBMA guidelines for writing multiple-choice questions. Available from: https://www.ebma.eu/wp-content/uploads/2019/02/EBMA-guidelines-for-item-writing-version-2017_3.pdf
Ngoc PN, Cheng CL, Lin YK. 2020. A meta-analysis on students’ readiness assurance test performance with Team-Based Learning. BMC Medical Education 20:223
Parmelee DX, Hudes P. 2012. Team-based learning: A relevant strategy in health professionals' education. Med Teach 34(5):411-13
10:30 am
Alla El-Awaisi
Myriam Jaam1, Kyle Wilby and Kerry Wilbur
1 Qatar University
Despite the growing emphasis on interprofessional education (IPE) within health profession programs and its potential value for enhancing student learning, there is a lack of substantial evidence guiding the authentic and accurate evaluation of student learning outcomes, including the translation of assessment data into meaningful scores and grades. Given the rising significance of incorporating reflection and simulation into IPE, this review aimed to systematically identify, evaluate, and synthesize existing literature that uses reflection and simulation as summative assessment tools to evaluate student outcomes following IPE activities. A total of 1,758 articles were identified in the searched databases. Five articles, of marginal quality, were included in this systematic review, highlighting the limited rigorous use of either reflection or simulation for summative assessment purposes. This review identified a need for summative IPE assessments alongside formative assessments. Moreover, it is crucial to provide training not only to faculty assessors, enhancing their competence and ensuring reproducibility of results, but also to students: equipping students with the essential knowledge, critical-thinking capabilities, and mindset required to evolve into reflective practitioners adept at interprofessional collaboration is pivotal. The assessment of IPE effectiveness remains a complex challenge, and a conspicuous gap exists within the current literature that necessitates further research.
References (maximum three)
Rogers, G. D., Thistlethwaite, J. E., Anderson, E. S., Abrandt Dahlgren, M., Grymonpre, R. E., Moran, M., & Samarasekera, D. D. (2017). International consensus statement on the assessment of interprofessional learning outcomes. Medical Teacher, 39(4), 347–359. https://doi.org/10.1080/0142159X.2017.1270441
Anderson, E., Smith, R., & Hammick, M. (2016). Evaluating an interprofessional education curriculum: A theory-informed approach. Medical Teacher, 38(4), 385–394. https://doi.org/10.3109/0142159X.2015.1047756
Barr, H., Gray, R., Helme, M., Low, H., & Reeves, S. (2016). Interprofessional education guidelines 2016. Centre for the Advancement of Interprofessional Education.
10:45 am
Sandra Ramos1
Nicole Flemming1, Vinod Gopalan1 and Pavla Simerska1
1 Griffith University
Background
Team-based learning (TBL) provides an active, structured form of small-group learning, combining individual and team learning with immediate feedback1. Pre-TBL preparation includes readings and other teaching activities. During TBL, students first take the Individual Readiness Assurance Test (iRAT) and then, within groups, complete the same assessment task as the Team Readiness Assurance Test (tRAT). TBL integrating the iRAT and tRAT has been shown to assist students in functioning as a team and building interpersonal communication skills2. However, there is inconsistency among universities regarding the frequency and weighting of these assessments.
Summary of work
We aimed to evaluate how the frequency and weighting of iRATs and tRATs affected medical students' performance. In Trimester 1, 2023, 225 medical students from the Griffith University MD program completed weekly TBL assessments with a 5% (iRAT) + 5% (tRAT) weighting, ensuring individual accountability to themselves and their peers. In Trimester 2, the same assessment format was implemented fortnightly at half the weighting (5% in total).
Results
Preliminary results indicated that students' performance decreased significantly when the frequency and weighting were reduced: mean iRAT results for Trimesters 1 and 2 were 81% and 61%, respectively (two-tailed P = 0.0001).
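For illustration only, the following is a minimal sketch of how such a between-trimester comparison might be computed, assuming per-student mean iRAT scores were available for each trimester. The score arrays below are fabricated stand-ins, and the abstract does not state which statistical test was used, so an unpaired two-tailed Welch's t-test is assumed:

import numpy as np
from scipy import stats

# Hypothetical stand-in data: per-student mean iRAT scores (%) per trimester.
# Fabricated for illustration only; these are not the study's data.
rng = np.random.default_rng(seed=1)
trimester1_scores = rng.normal(loc=81, scale=10, size=225)  # weekly, 5% + 5%
trimester2_scores = rng.normal(loc=61, scale=10, size=225)  # fortnightly, 5%

# Unpaired two-tailed comparison of cohort means (Welch's t-test,
# which does not assume equal variances between the two cohorts).
t_stat, p_value = stats.ttest_ind(trimester1_scores, trimester2_scores,
                                  equal_var=False)
print(f"Trimester 1 mean: {trimester1_scores.mean():.1f}%")
print(f"Trimester 2 mean: {trimester2_scores.mean():.1f}%")
print(f"t = {t_stat:.2f}, two-tailed P = {p_value:.4g}")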
Discussion
The effectiveness of TBL depends on students' preparation to participate in discussions with their peers. We hypothesise that preparation prior to the iRATs declined, directly impacting iRAT and tRAT results, and attribute this decline to the decreased weighting and frequency of these assessments.
Conclusions
Weekly TBL assessments, accounting for 10% of students’ grades, have improved students’ preparation and collaboration in TBL sessions.
Take-home messages / implications for further research or practice
The frequency and weighting of TBL assessments have affected students' preparation before TBL sessions, consequently impacting their performance in these assessments.
References (maximum three)
1 - Parmelee D, Michaelsen LK, Cook S, Hudes PD. Team-based learning: a practical guide: AMEE Guide No. 65. Med Teach. 2012;34:e275-87.
2 - Burgess, A., van Diggele, C., Roberts, C. et al. (2020) Team-based learning: design, facilitation and participation. BMC Med Educ 20 (Suppl 2), 461. https://doi.org/10.1186/s12909-020-02287-y