Ottawa 2024

Workplace matters and programmatic approaches

ePoster

2:30 pm

26 February 2024

Exhibition Hall (Poster 2)

Session Program

Debra Klamen1
1 Southern Illinois University School of Medicine 



Programmatic assessment as used in undergraduate medical education curricula has been around for over 10 years (1). The Southern Illinois University School of Medicine in Illinois adopted this methodology in 2020-2021. There have been the inevitable growing pains with the use of programmatic assessment, especially with the notion of movement from assessment OF learning to assessment FOR learning. One unexpected but positive result of this move in assessment has come from comments from the Associate Dean for Diversity, Equity and Inclusion, as well as numerous students who are under-represented in medicine in the United States. All working under the new system have reported a significant decrease in stereotype threat as they work their way through the medical school's curriculum. During the presentation, details of this finding will be discussed. 

Stereotype threat (2) refers to the phenomenon where individuals belonging to a certain social group experience anxiety or fear of confirming negative stereotypes associated with their group. This threat arises from the awareness that one's behavior or performance may be judged through the lens of these stereotypes, leading to self-doubt and underperformance. The impact of stereotype threat can be significant, reinforcing existing inequalities and limiting opportunities for individuals to excel. 


References (maximum three) 

Schuwirth LWT, Van der Vleuten CPM. Programmatic assessment: From assessment of learning to assessment for learning. Medical Teacher. 2011;33(6):478-485. 

Nguyen, H. H. D., & Ryan, A. M. (2008). Does stereotype threat affect test performance of minorities and women? A meta-analysis of experimental evidence. Journal of Applied Psychology, 93(6), 1314. 

Mohammed A. Alqarni1
Ahmad Gazawni2, Bayan Kharsan3 and Mohammed ALzamzami4
1 College of Medicine, Sulaiman Al Rajhi University, Saudi Arabia. 2 Ministry of Health, Saudi Arabia.
3 Al-Noor Specialist Hospital, Makkah, Saudi Arabia.
4 King Fahad Medical City, Riyadh, Saudi Arabia.




Background:
Medical education in Saudi Arabia is undergoing a significant transformation, shifting towards Competency-Based Medical Education (CBME) and requiring schools, faculty and students to engage in community service as part of their social responsibility. However, service learning remains underutilized, and assessing student development during these activities poses challenges. Embedding service learning within medical education offers a unique opportunity to develop skills and competencies that are often less emphasized in conventional curricula. This study identified core competencies and evaluation methods for medical service learning. 


Summary of work:
A focus group of medical educators and clinicians discussed competencies best suited for service learning. A literature review of service learning and competency frameworks supplemented the discussion. Six core competencies were identified: clinical reasoning, technical skills, communication, professionalism, teamwork, and situational awareness. Rubrics assessed competencies through reflections, observations and supervisor evaluations. 


Results and Discussion:
A national health volunteer program engaged 1000 medical volunteers to apply the competencies. The mini-CEX and portfolio reflection were the assessment methods used to capture student progress. The authors are evaluating the experience, and preliminary results are under review; growth in communication, professionalism, and empathy was captured. Clinical reasoning skills developed more variably and required program adjustment. Ongoing data collection will further refine the assessment tools. 


Conclusion:
Consensus amongst medical educators enabled a competency-based assessment model for service learning targeting underemphasized skills; the competencies shaped learning activities and assessments, showing promise for better learner preparedness. Service learning equips students with essential skills, fosters empathy and social responsibility, and prepares them to be well-rounded and compassionate healthcare professionals, bridging the gap between classroom learning and real-world healthcare practice. 


Take-home messages/implications for further research or practice: 
Implications include integrating service learning earlier in training and researching long-term impacts on competency attainment. Next steps include validating the rubrics and examining competency progression over time. 



References (maximum three) 

Stewart, T. and Wubbena, Z.C., 2015. A systematic review of service-learning in medical education: 1998–2012. Teaching and Learning in Medicine, 27(2), pp.115-122. 

Hunt, J.B., Bonham, C. and Jones, L., 2011. Understanding the goals of service learning and community-based medical education: a systematic review. Academic Medicine, 86(2), pp.246-251. 

Touchie, C. and ten Cate, O., 2016. The promise, perils, problems and progress of competency‐based medical education. Medical education, 50(1), pp.93-100. 

Peter Tzakas1
Aurthi Muthukumaran1 and Britton Sprules 
1 University of Toronto 



Competency-based education has gained significant traction in medical education as a means to ensure that future healthcare professionals possess the necessary skills and knowledge for clinical practice. One effective strategy for implementing competency-based education is the formation of a Competency Committee. 

Forming a Competency Committee in medical education entails several valuable steps. Establishing clear goals and objectives is paramount: defining the committee's purpose and responsibilities helps align its efforts with the overarching mission of the program. 

Another crucial lesson is the importance of stakeholder engagement, involving faculty, program directors, and committee members. Regular communication and solicitation of feedback from these stakeholders ensure that the competency framework remains relevant, responsive, and reflective of real-world clinical needs. 

Effective operations of the meetings include having the right people involved to review student performance, retrieving and having data in easy-to-view formats, and including an orientation on the meaning of the data. 

The conclusion of each meeting is important with discussing the possible designations for each student, following up on items, and closing the loop by ensuring students receive a summary of their status. 

With such important processes, it is important to always include a multi-pronged approach to gauge feedback to constantly improve the process. 


2. Why is the topic important? 
Challenges in forming a Competency Committee often arise from the need for adequate resources and infrastructure. Resource allocation for faculty time, training, and technology is essential for the successful implementation of competency-based assessments. Integrating the competency framework into existing curricula and evaluation systems requires careful planning and coordination to avoid redundancy and maximize efficiency. 

The evaluation of the Competency Committee itself is an important lesson learned. Regular assessment of the committee's effectiveness—including its impact on student learning outcomes and overall program quality—allows for adjustments and improvements. Feedback mechanisms, such as surveys or focus groups, can provide valuable insights into the committee's strengths and areas for development. 

The formation of a Competency Committee in medical education has proven to be a valuable strategy for enhancing the assessment and training of future healthcare professionals. The lessons learned from successful implementations underscore the significance of clear objectives, stakeholder engagement, resource allocation, flexibility, and ongoing evaluation. By incorporating these lessons, medical education programs can ensure that graduates are competent and well-prepared to meet the complex challenges of modern healthcare practice. 


3. Symposium format, including participant engagement methods 
  • 10 mins: introduction and overview of competency committees (CC) 

  • 15 mins: small-group check-in on each school's use of competency committees 

  • 15 mins: debrief in a large group 

  • 15 mins: overview of our CC 

  • 15 mins: small-group brainstorming on what makes a good CC and what needs to be monitored 

  • 10 mins: debrief in a larger group 

  • 10 mins: wrap-up 


4. Take-home messages/symposium outcomes/implications for further research and/or practice 
  • Understand the importance and goals of the competency committee 

  • Consider the multiple factors needed to operationalize your committee 

  • Establish a QI process to review your committee 



References (maximum three) 

1. Kinnear B, Warm EJ, Hauer KE. Twelve tips to maximize the value of a clinical competency committee in postgraduate medical education. Med Teach. 2018;40(11):1110-1115. 

2. Ekpenyong A, Padmore JS, Hauer KE. The Purpose, Structure, and Process of Clinical Competency Committees: Guidance for Members and Program Directors. J Grad Med Educ. 2021;13(2 Suppl):45-50. doi:10.4300/JGME-D-20-00841.1 

3. Goldhamer MEJ, Martinez-Lage M, Black-Schaffer WS, et al. Reimagining the Clinical Competency Committee to Enhance Education and Prepare for Competency-Based Time-Variable Advancement. J Gen Intern Med. 2022;37(9):2280-2290. 

Louisa Ng1
Jacob Kuek1, Simone Elliot1 and Kate Reid1
1 University Of Melbourne 



Background 
Workplace-based assessments (WBA) were introduced into medical training a few decades ago in recognition of the relationship between assessment and learning and in response to concerns about the workplace-based training of doctors. The increasing use of WBA has highlighted the importance of strategies for successful and sustainable implementation of such assessment. With the introduction of any novel assessments, user acceptability and engagement is key to continued success. 

At the University of Melbourne (UoM), two broad categories of WBAs were developed for the final year students (approx. 350 in a cohort) in 2022 – global perception WBA and skill-based WBA, based on the anticipated introduction of Entrustable Professional Activities for junior doctors by the Australian Medical Council. For successful and sustainable implementation of the WBAs, it is important to assess stakeholder experience of the WBAs – these stakeholders include the students themselves but also supervising staff. 


Methods 
Final year UoM medical students and supervisory staff members will be recruited for this mixed-methods study. An online survey will be sent to all final year medical students to collect quantitative data and free-text comments on students’ experience of WBA. Survey questions will be derived from similar studies and from past UoM evaluations. 

Semi-structured interviews will be conducted with final year students (n=10-15) and staff (n=10-15). Participants will be invited to discuss their perceptions of WBA and the impact of their experience on student learning. Interviews with individuals will be continued until data saturation occurs. The experience of WBA will be explored through thematic analysis. 


Results and Discussion 
This study is currently awaiting ethics approval and is anticipated to occur in the second half of 2023. Results will be presented at the Ottawa conference. 



References (maximum three) 

Norcini, J., & Burch, V. (2007). Workplace-based assessment as an educational tool: AMEE Guide No. 31. In Medical Teacher (Vol. 29, Issues 9–10, pp. 855–871). https://doi.org/10.1080/01421590701775453 

Massie, J., & Ali, J. M. (2016). Workplace-based assessment: a review of user perceptions and strategies to address the identified shortcomings. In Advances in Health Sciences Education (Vol. 21, Issue 2, pp. 455–473). Springer Netherlands. https://doi.org/10.1007/s10459-015-9614-0 

Thun How Ong1,2
Jason Chang1,2, Sok Hong Goh1, Limin Wijaya1,2, Ling Zhu1,2 and Deanna Lee1,2
1 Duke-NUS Graduate Medical School
2 Singhealth



Background
A perennial issue with workplace based assessment (WBA) is failure to identify students or trainees who are not meeting standards (Yepes-Rios et al., 2016). Frame of reference training can help align faculty expectations and improve grading consistency (Cook et al., 2009). However, there may be other reasons why faculty fail to flag such students (Kogan et al., 2011). 


Methods
Clinical faculty who were performing mini-CEX for students were asked to consider a student they had previously assessed who was not meeting standards. The current form asks faculty to assess if students need further development, meet expectations, or exceed expectations for their level of training. They were asked how they thought they and their peers would have graded the student. 


Results
28/140 faculty who had just completed a mini-CEX answered the survey. 22/28 indicated they thought they were assessing students well or very well. A large proportion thought that they themselves (46.2%) or their peers (42.3%) would have graded the weak student as meeting expectations. 


Discussion
A large proportion of our faculty passed students in their mini-CEX even though they internally assessed that the students had not met expectations, and expected that many fellow faculty would also do so. Despite this dichotomy, faculty perceived that they were assessing students well. This suggests that the grading given may not truly reflect the observed assessment and that there may be cultural norms influencing failure to fail. 


Conclusion and take-home message
Faculty may be unwilling to fail students who are performing poorly even though they are able to identify the poor performance. Faculty training needs to go beyond frame of reference training to educate faculty on the intent and implications of the assessment and to address a cultural reluctance to “fail” students. 


References (maximum three) 

Cook, D. A., Dupras, D. M., Beckman, T. J., Thomas, K. G., & Pankratz, V. S. (2009, Jan). Effect of rater training on reliability and accuracy of mini-CEX scores: a randomized, controlled trial. J Gen Intern Med, 24(1), 74-79. https://doi.org/10.1007/s11606-008-0842-3 

Kogan, J. R., Conforti, L., Bernabeo, E., Lobst, W., & Holmboe, E. (2011, Oct). Opening the black box of clinical skills assessment via observation: a conceptual model. Med Educ, 45(10), 1048-1060. https://doi.org/10.1111/j.1365-2923.2011.04025.x 

Yepes-Rios, M., Dudek, N., Duboyce, R., Curtis, J., Allard, R. J., & Varpio, L. (2016, Nov). The failure to fail underperforming trainees in health professions education: A BEME systematic review: BEME Guide No. 42. Medical Teacher, 38(11), 1092-1099. https://doi.org/10.1080/0142159x.2016.1215414 

Robert Bryce1
1 Royal Australian and New Zealand College of Obstetricians and Gynaecologists 



Background
Until 2023, RANZCOG's only WBAs were 6-monthly summative assessments presented by the trainees' supervisors and DOPS assessed by a limited number of consultants. Consequently, the general fellowship is largely unfamiliar with structured WBA and the principles of effective feedback. 

RANZCOG plans the introduction of more WBA in the form of mini-CEX (2023), MSF (2024) and CbD (2025 tbc). However, it is not prepared to move to "formal" programmatic assessment until there is confidence in the validity and reliability of our WBA. 

For mini-CEX, at least 12 assessments by 6 different assessors (Norcini & Burch, 2007); and, for MSF, 3 assessments by at least 5 assessors (Moonen-van Loon, 2013) would be required to provide acceptable reliability in their assessment components. 

Impediments to acceptance of WBA have been previously identified, especially lack of available time, trainer knowledge and understanding of WBA and trainer and trainee perceptions of the value of WBA to training (Massie & Ali, 2015). Suggested solutions were clarifying the purpose of WBA, managing the problem of time and addressing the quality of feedback provided. 

RANZCOG predicts that, if managed carefully, initial pushback against the commitment required by WBA can be minimised and that our committed fellowship will eventually find it engaging and empowering. 



Approach
RANZCOG has embarked on a pragmatic approach to the introduction of WBA by: 

A staged introduction of one new WBA per year 

A simplification of the instruments to make them more efficient and accessible to the fellowship 

Only requiring the numbers of assessments and assessors that are considered achievable 

Focussing on the feedback components over the assessment components 

Education of trainees and fellows in the nature, purpose and value of WBA 

Education of fellows and trainees in effective feedback. 



References (maximum three) 

Norcini J & Burch V. Workplace-based assessment as an educational tool. AMEE Guide No. 31. Med Teach. 2007;29(9):855-871 

Moonen-van Loon LM, Overeem L, Donkers HH et al. Composite reliability of a workplace-based assessment toolbox for postgraduate medical education. Adv Health Sci Educ Theory Pract. 2013;18:1087-1102 

Massie J & Ali JM. Workplace-based assessment: a review of user perceptions and strategies to address the identified shortcomings. Adv Health Sci Educ. 2016;21(2):455-473. doi:10.1007/s10459-015-9614-0 

Sean Hogan1
Kenji Yamazaki2, Steve Bandeian3 and Eric Holmboe1
1 Accreditation Council for Graduate Medical Education
2 Elevance Health, formerly Anthem




Background:
Upon graduating residency, Family Medicine (FM) and Internal Medicine (IM) physicians become responsible for coordinating care for patients with chronic diseases like type-2 diabetes (DM-2). The US Centers for Disease Control and Prevention recommends biannual HbA1c tests and yearly screening for retinopathy and nephropathy for DM-2 patients [1]. The extent to which recent graduates adhere to these clinical guidelines is unknown. By observing graduates in early unsupervised practice, we demonstrate that quality measures can give programs feedback about residents’ disease management [2]. 


Summary of work:
We report guideline adherence for patients of recent residency graduates. We identified physicians who graduated residency in 2016 and used a private claims database to determine the percentage of their DM-2 patients who received tests for HbA1c, nephropathy, and retinal function in 2017. Our database includes the US’ largest provider network, insuring roughly one third of Americans. Responsibility for these care processes was attributed to the physician who provided the plurality of outpatient evaluation visits [3]. Physicians who saw ≥5 DM-2 patients are included. 
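The attribution logic described above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' actual pipeline: the record layout, field names, and toy data are all hypothetical, and real claims processing would involve visit-type codes, date windows, and tie-breaking rules not shown here.

```python
from collections import Counter, defaultdict

# Hypothetical claim records for DM-2 patients: (patient_id, physician_id, visit_type).
# The schema and values are illustrative only.
claims = [
    ("p1", "drA", "outpatient"), ("p1", "drA", "outpatient"), ("p1", "drB", "outpatient"),
    ("p2", "drA", "outpatient"), ("p2", "drA", "outpatient"),
    ("p3", "drB", "outpatient"),
]

def attribute_patients(claims):
    """Attribute each patient to the physician who provided the
    plurality of that patient's outpatient evaluation visits."""
    visits = defaultdict(Counter)
    for patient, physician, visit_type in claims:
        if visit_type == "outpatient":
            visits[patient][physician] += 1
    return {patient: counts.most_common(1)[0][0] for patient, counts in visits.items()}

def physician_panels(attribution, min_patients=5):
    """Group attributed patients by physician; keep only panels with
    at least min_patients patients (the study used >=5)."""
    panels = defaultdict(set)
    for patient, physician in attribution.items():
        panels[physician].add(patient)
    return {doc: pts for doc, pts in panels.items() if len(pts) >= min_patients}

def screening_rate(panel, screened):
    """Share of a physician's attributed DM-2 patients with a recorded test."""
    return sum(1 for p in panel if p in screened) / len(panel)
```

Per-physician screening rates computed this way are what the Results section summarizes as the percentage of each doctor's panel screened for HbA1c, nephropathy, and retinopathy.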


Results:
Of 3887 recent IM and FM graduates, 1966 saw ≥5 DM-2 patients aged 18-75 (range 5-266 patients/physician). For HbA1c, 18% of recent graduates screened 100% of their DM-2 patients, 83% screened ≥50% of patients, and ~5% screened none. For nephropathy, only 11% of doctors screened all patients, 88% screened ≥50% of patients, and <1% screened none. For retinopathy, only 0.1% of doctors screened all patients, 15% screened ≥50% of patients, and 5% screened none. 


Discussion:
Compliance with DM-2 screening guidelines varies substantially among patients of primary-care physicians who recently completed residency. 


Conclusions:
Residency programs may need to better prepare their graduates for chronic-disease management. 


Implications for further research or practice:
Insurance claims reveal wide variation in recommended diabetes screenings for patients of recent residency graduates. Possible sources of variation require further study. 



References (maximum three) 

1. https://www.cdc.gov/diabetes/managing/care-schedule.html
2. Kim JG, et al. The reliability of graduate medical education quality of care clinical performance measures. J Grad Med Educ. 2022 Jun;14(3):281-288. doi: 10.4300/JGME-D-21-00706.1. Epub 2022 Jun 13. PMID: 35754636; PMCID: PMC9200256.
3. Nyweide DJ, et al. Relationship of primary care physicians' patient caseload with measurement of quality and cost performance. JAMA. 2009 Dec 9;302(22):2444-50. doi: 10.1001/jama.2009.1810. PMID: 19996399; PMCID: PMC2811529 

Muirne Spooner1
1 Royal College of Surgeons in Ireland University of Medicine and Health Sciences 



Background
Medical education increasingly involves student migration and Western-devised programmes delivered at international campuses (1). Feedback is increasingly recognised as complex and culturally contextualised. To understand the cultural factors which shape learners’ responses to feedback, we explored medical students’ perspectives in our unique setting of three transnational campuses with over ninety nationalities. 


Summary of work
This study was conducted at one medical school with three campuses in Western Europe, the Middle East and East Asia. Final-year medical students participated in 57 semi-structured interviews, which were analysed using template analysis. Key themes were compared across the three medical campuses and across student groups of varying national backgrounds. 

Results
Themes were critically reviewed through Hofstede’s Dimensions, Small Cultures and Figured Worlds theories. Learners from all backgrounds and sites describe a shared experience of a distinctive culture in Medicine which is criticism-focussed and hierarchical. Learners self-author as passive recipients with “stern” senior clinicians and as peer collaborators with accessible faculty supervisors. While national cultural background rarely influenced feedback responses, early life feedback experiences commonly affected preferences. Harnessing peer support was highly valued in making the most of feedback. 


Discussion
While feedback models champion collaboration, in practice medical students are challenged by an inherently hierarchical culture. Learners adapt to this culture in two ways: they shift their response and identity according to the supervisor’s role, and they lean on peers to make sense of feedback and for emotional support in challenging environments. 


Conclusion
Cultural factors significantly impact learner feedback experiences, with a distinctive mistreatment culture a common theme across diverse contexts. National background appears to have limited influence on feedback responses, but early-life experiences are under-explored in considering learner feedback perspectives. 

Take-home
Educators should consider how to embed psychological safety in feedback practices 


References (maximum three) 

1. Brouwer, E., Frambach, J., & Driessen, E. (2017). Mapping the scope of internationalized medical education. Paper presented at the The Network Towards Unity for Health Annual Conference.