Workplace-based Assessments
Oral Presentation
4:00 pm
27 February 2024
M211
Session Program
4:00 pm
Sandika Baboolal1
VS Singaram1
1 University of KwaZulu-Natal
Background
Surgical training globally is facing crucial challenges with evolving demands. This study investigated the use and effectiveness of workplace-based assessments (WBAs) and their impact on training, feedback, and perioperative teaching in surgical training programs.
Summary
A mixed-methods, cross-sectional national electronic survey was conducted with surgical trainees and consultant trainers across eleven surgical disciplines at all eight major surgical training universities in South Africa. In total, 108 surgical trainees and 41 supervising consultant trainers responded to the survey.
Results
The most significant educational gap identified by respondents was inadequate perioperative feedback. A third of respondents were currently using WBAs. Trainees and trainers using WBAs rated the general quality of surgical feedback higher than those who did not (p=0.02). They also gave higher ratings for the general quality of feedback to trainees on their skills and competence (p=0.04), for trainee supervision (p=0.01), and for the specialist training program overall (p=0.01). The WBA group also rated the assessment of competencies such as the trainee as an effective communicator (p<0.01) and collaborator (p=0.04) more highly.
Discussion and Conclusion
We found that the use of WBAs enhanced the quality and effectiveness of feedback in surgical training programs, enhanced perioperative teaching and learning, and improved the assessment of relational competencies. WBA use was also associated with high ratings for the quality of trainee supervision. Faculty and trainee development, strengthening the trainee-trainer relationship, and integrating iterative stakeholder feedback could help realize the full potential of WBAs to augment surgical training across disciplines.
Take home message
WBAs enhance perioperative teaching and learning and improve the quality and effectiveness of feedback between trainees and trainers.
4:15 pm
Hedva Chiu1
Timothy Wood1, Adam Garber1, Wade Gofton1, Samantha Halman1, Janelle Rekman1 and Nancy Dudek1
1 The University of Ottawa
Background:
Workplace-based assessment (WBA) is a recognized assessment method for competence in post-graduate medical education.1,2 Most WBA relies on physician supervisors. However, in a complex training environment where supervisors are unavailable to observe certain aspects of a trainee’s performance, nurses are well-positioned to do so. The Ottawa Resident Observation Form for Nurses (O-RON) was developed to capture nurses’ assessment of trainee performance and results have demonstrated strong evidence for validity in Orthopaedic Surgery. However, different clinical settings can impact a tool’s performance. This project studied the use of the O-RON in three different specialties at the University of Ottawa (UO).
Summary of work:
O-RON forms were distributed on the Internal Medicine, General Surgery, and Obstetrical wards at UO over nine months. Validity evidence related to quantitative data was collected. Exit interviews with nurse managers were performed and content was thematically analyzed.
Results:
179 O-RONs were completed on 30 residents. With four forms per resident, the O-RON's reliability was 0.82. Global judgement responses and the frequency of concerns were correlated (r = 0.627, p<0.001). Exit interviews identified factors impacting form completion, including heavy clinical workloads and large volumes of residents.
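The abstract does not state how the 0.82 reliability for four forms per resident was derived; a common approach in WBA research is to project reliability across multiple forms with the Spearman-Brown formula. The sketch below is purely illustrative of that relationship and is not the authors' analysis; the single-form value it recovers is an assumption of this example, not a reported result.

```python
def spearman_brown(r_single: float, k: int) -> float:
    """Projected reliability of the mean score across k parallel forms."""
    return k * r_single / (1 + (k - 1) * r_single)

def single_form_reliability(r_k: float, k: int) -> float:
    """Invert Spearman-Brown: implied single-form reliability given k-form reliability."""
    return r_k / (k - (k - 1) * r_k)

# Illustrative only: if four forms jointly yield 0.82, one form implies ~0.53
r1 = single_form_reliability(0.82, 4)
print(round(r1, 2))                      # implied per-form reliability
print(round(spearman_brown(r1, 4), 2))   # recovers the reported four-form 0.82
```

This illustrates why a minimum number of forms per resident matters: composite reliability rises with the number of completed forms, which is why the environmental barriers to form completion noted in the exit interviews directly threaten the tool's dependability.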
Discussion:
Consistent with the original study, the findings demonstrated strong evidence for validity. However, the total number of forms collected was less than expected, which appears to be due to environmental factors.
Conclusion:
The O-RON is a useful tool to capture nurses’ assessment of trainee performance and demonstrated reliable results in various clinical settings. However, understanding the assessment environment and ensuring it has the capacity to perform this assessment is crucial for successful implementation.
Implications for future research:
Input from nurses on resident performance is valuable and the O-RON captures this assessment. Future research should focus on how we can create conditions whereby implementing this tool is feasible from the perspective of nurses.
References (maximum three)
1. Epstein RM, Hundert EM. Defining and assessing professional competence. JAMA. 2002;287(2):226-235. doi:10.1001/jama.287.2.226
2. Prentice S, Benson J, Kirkpatrick E, Schuwirth L. Workplace-based assessments in postgraduate medical education: A hermeneutic review. Medical Education. 2020;54(11):981-992. doi:10.1111/medu.14221
4:30 pm
Jung G Kim1
Lindsay Mazotti2, Eric Holmboe3 and Michael Kanter2
1 NYU Grossman School of Medicine
2 Kaiser Permanente Bernard J. Tyson School of Medicine
3 ACGME
Clinical performance measures that assess residents' quality of care for patients are a U.S. national accreditation requirement but remain under-utilized.1,2 Publicly reported measures of practicing physician performance used in U.S. national reporting are key to improving care transparency and public accountability.2,3
We examined variation in primary care residents' quality of care and identified predictors of extreme high and low resident performance. Eight accredited U.S. residency programs, training 682 family medicine and internal medicine residents during their three-year ambulatory care training, were examined between July 2014 and June 2019. Performance of residents, of attending physicians at the same medical centers, and national point estimates were analyzed using the NCQA's Healthcare Effectiveness Data and Information Set (HEDIS) publicly reported quality of care measures: preventive cancer screenings, blood pressure control, and monitoring of patients on persistent medications.
Resident performance for 17,771 patients differed across training years for Annual Monitoring for Patients on Persistent Medications, Cervical Cancer Screening, and Colorectal Cancer Screening. Resident performance was lower than attending-level performance for all HEDIS measures but higher than the national average on all metrics except Annual Monitoring for Patients on Persistent Medications. Variation in performance generally narrowed as residents progressed through training. When comparing resident-level to attending-level variation, attendings showed notably narrower variation. Caring for more patients was associated with lower odds (~6%, p<.001) of extreme low performance.
Examining resident performance over the full course of residency training is important for characterizing practice patterns, guiding training design, and improving the transparency of quality of care to patients. Using publicly reported quality measures may be a practical step toward garnering national insights into providing high-quality care to the public.
Further study of publicly reported quality measures may be a practical step for insights about the care residents deliver and support the public accountability of residency training.
References (maximum three)
1. ACGME. ACGME Common Program Requirements (Residency). Published online July 1, 2020:55.
2. National Academies of Sciences, Engineering, and Medicine, Health and Medicine Division. Graduate Medical Education Outcomes and Metrics: Proceedings of a Workshop. (Martin P, Zindel M, Nass S, eds.). National Academies Press; 2018. doi:10.17226/25003
3. Kim JG, Rodriguez HP, Holmboe ES, et al. The Reliability of Graduate Medical Education Quality of Care Clinical Performance Measures. Journal of Graduate Medical Education. 2022;15(3).