Test development and analysis strategies and COVID responses
ePoster
11:00 am
27 February 2024
Exhibition Hall (Poster 2)
Session Program
11:00 am
Varanya Srisomsak1
Chantacha Sitticharoon1, Issarawan Keadkraichaiwat1 and Sunan Meethes2
1 Department of Physiology, Faculty of Medicine Siriraj Hospital, Mahidol University
2 Education Department, Faculty of Medicine Siriraj Hospital, Mahidol University
Background:
Exam analysis has two main statistics: the difficulty index (p) and the discrimination index (r). Items are flagged as remarkable when the p-value is <0.25 and/or the r-value is <0 (the thresholds), but corrections may also be needed when the correct choice's p-value is comparable to that of other choices.
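To make the two statistics concrete, the minimal Python sketch below computes a per-item difficulty index and discrimination index and applies the thresholds described above. It assumes the common definitions (p = proportion of examinees answering the item correctly; r = item-total point-biserial correlation); the abstract does not state which exact formulas or software were used, and all names in the code are illustrative.

import numpy as np

def item_analysis(responses, key):
    """Minimal MCQ item analysis: difficulty (p), discrimination (r), threshold flags.

    responses : (n_examinees, n_items) array of chosen options, e.g. 'A'-'E'
    key       : (n_items,) array of keyed (correct) options
    """
    responses = np.asarray(responses)
    key = np.asarray(key)
    scored = (responses == key).astype(float)   # 1 if the keyed option was chosen, else 0
    totals = scored.sum(axis=1)                 # each examinee's total score

    p = scored.mean(axis=0)                     # difficulty index: proportion answering correctly
    # Discrimination index taken here as the item-total point-biserial correlation;
    # other definitions (e.g., upper/lower 27% group difference) are also in use.
    r = np.array([np.corrcoef(scored[:, i], totals)[0, 1] for i in range(scored.shape[1])])

    flagged = (p < 0.25) | (r < 0)              # the threshold rule described above
    return p, r, flagged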
Summary of Work:
The research investigated exam analyses of preclinical subjects (46 subjects, 237 exams) from academic years 2017-2022 to determine the characteristics of abnormal exams.
Results:
Exam corrections were required for 34.18% (81/237) of exams, with a mean p-value of 0.701 (range 0-0.780) and a mean r-value of 0.284 (range -0.250 to 0.440), resulting from multiple answers (46.91%), wrong answers (40.74%), question removal (4.94%), and awarding points to all choices (7.41%). Excluding question removal (no p-value or r-value available), corrections with p-value ≥0.25 accounted for 19.48% of total corrections, 28.95% of multiple-answer corrections, and 12.12% of wrong-answer corrections. Corrections with r-value ≥0 accounted for 49.35% of total corrections, 73.68% of multiple-answer corrections, 66.67% of corrections awarding points to all choices, and 18.18% of wrong-answer corrections. Regarding multiple answers, the first vs. second correct choices had p-values of 0.215 vs. 0.499, respectively, and r-values of 0.07 for both. For wrong answers, the incorrect vs. correct choices had p-values of 0.101 vs. 0.676 and r-values of -0.070 vs. 0.201, respectively.
Discussion:
The main reasons for exam corrections were multiple answers and wrong answers. Relying only on p-value and r-value thresholds may lead to under-detection of abnormal exams. Additional correct choice(s) often had a higher p-value than the originally keyed correct choice.
Conclusion:
The use of thresholds resulted in 20-50% under-detection of abnormal exams (19.48% of corrected items had p-values ≥0.25 and 49.35% had r-values ≥0), making it necessary for a new guideline to also flag items whose correct choice's p-value is comparable to that of other choices, thereby improving evaluation accuracy.
Take Home Messages:
Using only p-value and r-value thresholds can under-detect abnormal exams. Additionally monitoring exams in which the correct choice's p-value is equal to or less than that of other choices might prevent inaccuracies.
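As one way to operationalize this message, the sketch below (a hypothetical helper, not taken from the study) flags items in which any non-keyed choice is selected at least as often as the keyed choice, i.e., where the correct choice's p-value is equal to or less than another choice's.

import numpy as np

def flag_comparable_choices(responses, key):
    """Flag items where some distractor is chosen at least as often as the keyed answer."""
    responses = np.asarray(responses)
    key = np.asarray(key)
    n_examinees, n_items = responses.shape
    flags = np.zeros(n_items, dtype=bool)
    for i in range(n_items):
        options, counts = np.unique(responses[:, i], return_counts=True)
        props = dict(zip(options, counts / n_examinees))   # selection proportion per choice
        p_key = props.get(key[i], 0.0)                     # proportion choosing the keyed option
        p_top_distractor = max((v for o, v in props.items() if o != key[i]), default=0.0)
        flags[i] = p_top_distractor >= p_key               # a distractor is comparable or more popular
    return flags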
11:05 am
Darin Tangsittitum1
Peeradon Dutsadeevettakul2, Manasvin Onwan3 and Anongnard Kasorn4
1 Faculty of Medicine, Srinakharinwirot University
2 Faculty of Medicine Vajira Hospital, Navamindradhiraj University
3 Department of Preventive and Social Medicine, Faculty of Medicine, Srinakharinwirot University, Nakhon Nayok, Thailand
4 Department of Basic Medical Science, Faculty of Medicine Vajira Hospital, Navamindradhiraj University, Bangkok, Thailand
Background
Medical education in Thailand transitioned to primarily online programs during the past two years of the pandemic. Virtual lectures, lab simulations, and remote proctoring were carried out during this period. The return to onsite learning brings new challenges and opportunities for medical education around the world to adjust to the new era now that the new normal has begun.
Summary Of Work
A mixed questionnaire consisting of qualitative (open-ended) and quantitative (close-ended) items was administered to Thai medical students from two centers in January 2023 (N=105).
Summary Of Results
Most students (69%) strongly believed that onsite learning was more effective than online learning. However, they agreed that the learning outcomes of the two approaches were comparable. Most students (94%) agreed that active learning, especially practical lessons such as labs, dissections, problem-based learning, and group discussions, should be carried out in person.
In addition, more than 80% of the students stated that online lectures were convenient and could accommodate their preferred learning method. On the other hand, more than half of respondents stated that the transition from online to onsite learning was difficult in the post-COVID-19 era. Almost 80% of them struggled to cope with this transition on both mental and physical levels. Therefore, the majority of the students (86.67%) expressed a preference for a hybrid learning approach.
Discussion And Conclusion
Although the post-COVID-19 era has brought hope and excitement for a return to in-person classroom learning, medical schools can also implement flexible learning systems to suit their students’ preferences.
Take Home Messages
Medical schools need to adapt by providing flexible, personalized learning options that meet each student's unique needs and preferences, equipping students with the knowledge and skills needed for their future careers and improving educational outcomes.
References
1. Lee BE, Zlotshewer BA, Mayeda RC, Kaplan LI. Impact of online-only instruction on preclinical medical education in the setting of COVID-19: Comparative analysis of online-only vs. Hybrid instructions on academic performance and mental wellbeing. Med Sci Educ [Internet]. 2022 [cited 2023 Aug 6];32(6):1367–74. Available from: https://pubmed.ncbi.nlm.nih.gov/36245945/
2. Hameed T, Husain M, Jain SK, Singh CB, Khan S. Online medical teaching in COVID-19 era: Experience and perception of undergraduate students. Maedica (Buchar) [Internet]. 2020 [cited 2023 Aug 6];15(4):440–4. Available from: https://pubmed.ncbi.nlm.nih.gov/33603900/
3. Hameed BZ, Tanidir Y, Naik N, Teoh JY-C, Shah M, Wroclawski ML, et al. Will “hybrid” meetings replace face-to-face meetings post COVID-19 era? Perceptions and views from the urological community. Urology [Internet]. 2021 [cited 2023 Aug 6];156:52–7. Available from: https://pubmed.ncbi.nlm.nih.gov/33561472/
11:10 am
Kent Hecker1
Richard Feinberg2, Fen Fan2, Courtney Vengrin3, Janine Hawley3 and Raja Subhiyah2
1 International Council for Veterinary Assessment/University of Calgary
2 NBME
3 International Council for Veterinary Assessment
Background:
In health professions education, progress tests tend to be blueprinted for a specific milestone (e.g., competency at time of graduation) and used primarily for formative purposes to provide reinforcement-based learning within the cognitive domain over time (1). Defining a test's blueprint is a critical validity requirement for ensuring proper alignment between content and the intended construct. Traditionally, a test blueprint is informed by a practice analysis, but this process is often very time-consuming, expensive, and includes content that may be out of scope for an assessment composed of multiple-choice questions. To address this problem, the National Board of Medical Examiners proposed a rapid blueprinting process (RBP) as a cost-effective and timely alternative (2,3).
Summary:
The RBP consists of two phases. During phase 1, an expert panel representing various veterinary stakeholder groups participated in a 2-day meeting to create a draft blueprint by discussing expected competencies, outlining major content areas, assigning weights, and defining subtopics relevant to the progress test. During phase 2, the draft blueprint was packaged within a survey and sent to a broader stakeholder community for review and suggestions of minor modifications.
Results:
In Phase 1, the expert panel identified 5 topic areas and their weights. One hundred thirty-seven stakeholders covering different roles and primary areas of focus were then surveyed, and 57% responded. 71% agreed that the content domains were appropriate, and while 50.6% felt the weightings should be changed, the mean weights provided by survey respondents were within 1 or 2 percentage points of what was proposed.
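The abstract does not list the five topic areas or their proposed weights, so the sketch below uses hypothetical domain names and numbers purely to illustrate the Phase 2 check that the survey-mean weights fell within 1 or 2 percentage points of the panel's proposal.

# Hypothetical domains and weights, for illustration only
proposed = {"Domain A": 30, "Domain B": 25, "Domain C": 20, "Domain D": 15, "Domain E": 10}
survey_mean = {"Domain A": 29, "Domain B": 26, "Domain C": 21, "Domain D": 14, "Domain E": 10}

TOLERANCE = 2  # percentage points, matching the reported level of agreement

for domain, weight in proposed.items():
    diff = abs(weight - survey_mean[domain])
    verdict = "retain proposed weight" if diff <= TOLERANCE else "revisit with the expert panel"
    print(f"{domain}: proposed {weight}%, survey mean {survey_mean[domain]}% -> {verdict}")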
Discussion:
Rapid blueprinting provided a practical framework to identify content areas and their relative importance from across academic veterinary medicine.
Conclusion:
Rapid blueprinting provided a systematic and practical framework to collect validity evidence that can be applied to progress tests and other lower-stakes assessments.
References
1. Dion V, St-Onge C, Bartman I, Touchie C, Pugh D. Written-Based Progress Testing: A Scoping Review. Acad Med. 2022 May 1;97(5):747-757. doi: 10.1097/ACM.0000000000004507. Epub 2022 Apr 27. PMID: 34753858.
2. Clauser A, Subhiyah R, Martin DF, Guernsey J. A Fresh Perspective: Examination Blueprint Development. ABMS Conference; 2017.
3. Subhiyah R, Clauser A, Park YS, Martin DF, Labovitz AJ. Rapid Blueprinting: An Efficient and Effective Method for Designing Content of Assessments. (Submitted to NEJM)