Presentation Description
Marcos Rojas1
Sharon F. Chen1, Kathleen Gutierrez1, Argenta Price1 and Shima Salehi1
1 Stanford University
Background:
Clinical reasoning (CR) is pivotal in healthcare education, yet its reflection component is often underrepresented in assessment tools (1). Inspired by STEM's problem-solving focus (2), our study presents a unique AI-driven tool that delves into the CR process, capturing often-overlooked reflection practices and pinpointing areas for enhancement. By harnessing AI methodologies, we not only make the evaluation of reflection possible but also magnify the scalability of this tool, positioning it to strengthen clinical reasoning education across all stages of medical training.
Summary of work:
Previously, we designed an online assessment for physicians and students to capture their CR execution steps and the reflective practices behind them. After initial data evaluation, the assessment was modified to better capture reflection and was tested on medical practitioners ranging from first-year medical students to expert physicians. Their responses and feedback shaped the revised assessment and a reflection-focused scoring codebook. Concurrently, an AI model is under development for future autonomous grading.
Results:
As of August 2023, our pilot assessment involved two medical students, a resident, and a physician, with an emphasis on CR reflection. We plan to extend this pilot to six more participants from diverse medical career stages by September 2023. Post-pilot, we will examine the AI model's grading capability against human coders.
Discussion:
Melding AI capabilities with CR assessments offers profound insights into medical decision-making. The assessment, focusing on both execution and the underlying reflective thought, pinpoints clinical reasoning gaps, paving the way for tailored educational interventions.
Conclusions:
Integrating AI into CR assessment yields profound insights into medical cognitive processes. This innovative tool enhances evaluation depth and promotes continuous learning in medical education.
Take-home message:
Utilizing AI in clinical reasoning assessment is a pivotal step toward refining and advancing assessment methods in medical education.
References (maximum three)
1. Daniel M, Rencic J, Durning SJ, Holmboe E, Santen SA, Lang V, et al. Clinical Reasoning Assessment Methods: A Scoping Review and Practical Guidance. Acad Med. 2019 Jun;94(6):902–12.
2. Price A, Salehi S, Burkholder E, Kim C, Isava V, Flynn M, et al. An accurate and practical method for assessing science and engineering problem-solving expertise. Int J Sci Educ. 2022 Sep 2;44(13):2061–84.