Presentation Description
Louise Belfield1
Steven Roberts1, Chinedu Agwu1, Shafeena Anas1, Lisa Jackson1, Michael Ferenczi1 and Naomi Low-Beer1
1 Brunel University London
Background:
Brunel Medical School opened in 2022, embedding Team-Based Learning within a Programmatic Assessment strategy. Individual readiness assurance tests (iRATs), sampled longitudinally, count towards student progression decisions, requiring rapid creation of a high-volume, high-quality bank of single-best-answer (SBA) questions. Large Language Models (LLMs) such as ChatGPT are widely used for rapid text generation (1), presenting an opportunity to expedite question-bank building.
Summary of work:
Using an iterative process, our team of scientists, clinicians, and educators refined and tested a framework of ChatGPT instructions, integrating specific commands for SBA writing, that enabled the generation of high-quality SBAs.
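The framework itself is not reproduced in this abstract. Purely as an illustration, the sketch below shows how structured SBA-writing instructions of this kind might be issued programmatically through the OpenAI Python client; the model name, system prompt and user command are hypothetical stand-ins, not the authors' refined instructions, and the team may equally have worked in the ChatGPT interface directly.

```python
# Illustrative only: issuing structured SBA-writing instructions to an
# LLM via the OpenAI Python client (pip install openai). The prompt text
# is a hypothetical stand-in for the framework described in the abstract.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The abstract reports that output quality improved when the commands were
# precise and when the question writer's professional discipline was
# specified; both are reflected in this hypothetical system prompt.
SYSTEM_PROMPT = (
    "You are a consultant physician and experienced medical-school "
    "examiner. Write single-best-answer (SBA) questions consisting of a "
    "clinical vignette stem, a focused lead-in, and five homogeneous "
    "options with exactly one best answer. The stem must pass the "
    "'cover test': answerable by a knowledgeable reader with the "
    "options hidden."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumption; the abstract names only "ChatGPT"
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": (
            "Generate one SBA on the management of type 2 diabetes for "
            "a Year 2 UK medical student, with brief feedback explaining "
            "why each option is correct or incorrect."
        )},
    ],
)
print(response.choices[0].message.content)
```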
Results:
ChatGPT could:
- Create: new SBAs from scratch, including items for training, without compromising the existing question bank, and generate meaningful feedback.
- Quality assure: SBAs, ensuring that specific structural requirements were met (including passing the “cover test”), that EDI considerations were incorporated, that regulatory requirements were satisfied, and that items passed performance analysis (psychometrics).
- Adapt: items while maintaining a consistent style, improving poorly performing items, generating SBAs from other item formats, changing the educational context (e.g. year of study), modifying the regulatory, geographical, linguistic and cultural context, and altering question complexity.
- Engage: educators in co-creation of SBAs, developing a culture of teamwork and creating a community of practice.

The effectiveness of ChatGPT in achieving these outcomes was influenced by the precision of the commands and by specifying the professional discipline of the question writer.
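The abstract does not reproduce the quality-assurance commands; as a further illustration only, the sketch below shows what such an audit instruction might look like via the same client. The checklist wording is our reconstruction of the criteria listed above, and `draft_sba` is a placeholder, not part of the authors' method.

```python
# Illustrative quality-assurance command; the checklist reconstructs the
# criteria named in the abstract and is not the authors' validated wording.
from openai import OpenAI

client = OpenAI()
draft_sba = "..."  # an existing SBA item to audit (placeholder)

QA_PROMPT = (
    "Review the SBA below as a medical assessment expert. Check that "
    "(1) the stem passes the cover test, (2) the options are homogeneous "
    "and free of cues, and (3) the item meets EDI and regulatory "
    "expectations. List any failures, then return a corrected item."
)

review = client.chat.completions.create(
    model="gpt-4o",  # assumption, as above
    messages=[{"role": "user", "content": f"{QA_PROMPT}\n\n{draft_sba}"}],
)
print(review.choices[0].message.content)
```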
Discussion:
There are few reports on how faculty may utilise LLMs to develop and quality-assure SBAs (2). We demonstrate how optimised ChatGPT commands can address challenging aspects of SBA writing and yield high-volume, high-quality items.
Conclusions:
ChatGPT can accelerate SBA writing and quality assurance, adapting items for different global and educational contexts.
Take-home message:
With optimised use, ChatGPT is an effective educational tool to create, quality-assure and adapt SBA items, and engage the learning community.
References
1. Sullivan, M., Kelly, A. and McLaughlan, P. (2023) ‘ChatGPT in higher education: Considerations for academic integrity and student learning’, Journal of Applied Learning and Teaching, 6(1). Available at: https://doi.org/10.37074/jalt.2023.6.1.17.
2. Sabzalieva, E. and Valentini, A. (2023) ChatGPT and Artificial Intelligence in higher education: Quick start guide. UNESCO. Available at: https://unesdoc.unesco.org/ark:/48223/pf0000385146.