Assessment tools and instruments
Oral Presentation
11:30 am
28 February 2024
M206
Session Program
11:30 am
Andrew Bartlett1
Ines Krass1, Irene Um1, and Carl Schneider1
1 Sydney University
Background:
Pharmacist preceptors play a pivotal role in nurturing the professional growth of students and interns during pre-registration training. Ensuring the quality and consistency of preceptor competency assessment has emerged as a critical necessity. A shared consensus is essential to determine the competencies warranting assessment among preceptors and to identify a suitable assessment strategy.
Summary of the Study:
Employing a modified Delphi method via an anonymous survey, our study spanned three rounds. Drawing on 16 competencies identified through a prior literature review (1), a panel of experts encompassing stakeholder organizations, policy developers, professionals and academics evaluated each competency. Participants offered insights on phrasing, the necessity of assessment (mandatory, preferable, or unnecessary), assessment feasibility, who should assess, and the preferred mode of assessment. A consensus threshold of 70% guided decision-making, with items not reaching consensus refined and re-presented in subsequent rounds.
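To make the consensus rule concrete, the sketch below tallies panel ratings for a single competency against the 70% threshold. The ratings, competency handling, and helper names are invented for illustration; they are not the study's data or instrument.

```python
# Minimal sketch of a 70% Delphi consensus rule, as described above.
# The panel ratings below are hypothetical, not study data.
from collections import Counter

CONSENSUS_THRESHOLD = 0.70  # 70% agreement, as used in the study

def check_consensus(ratings):
    """Return (consensus_reached, winning_category) for one competency.

    `ratings` holds one category per panelist: "mandatory",
    "preferable", or "unnecessary".
    """
    category, votes = Counter(ratings).most_common(1)[0]
    if votes / len(ratings) >= CONSENSUS_THRESHOLD:
        return True, category
    return False, None  # refine and re-present in the next round

# Hypothetical round: three of four panelists agree (75% >= 70%).
ratings = ["mandatory", "mandatory", "mandatory", "preferable"]
reached, category = check_consensus(ratings)
print(category if reached else "re-survey next round")  # -> mandatory
```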
Results:
Of the 54 experts approached, 20 completed the initial round, followed by 13 in the second round and nine in the third. Consensus was achieved for 17 competencies, with assessment deemed mandatory for eight and preferable for nine. Consensus was also reached on who should assess and the mode of assessment for 12 competencies.
Discussion:
A modified Delphi method has provided clarity on which competencies should be assessed, who should perform assessment and the mode of assessment. The consensus-building process underscores the significance of balancing assessment feasibility and acceptability, ideally fostering a flexible approach to pharmacist preceptor evaluation. The consensus recommendations provide evidence to support a proposed pharmacist preceptor competency assessment framework which addresses a gap for accreditation bodies and education providers alike.
Conclusion:
A competency-based assessment strategy was developed via consensus of pharmacy education experts. The outcome is a significant step towards development of a competency assessment framework, thereby enhancing the pharmacy educational landscape.
References (maximum three)
1. Bartlett AD, Um IS, Luca EJ, Krass I, Schneider CR. Measuring and assessing the competencies of preceptors in health professions: a systematic scoping review. BMC Medical Education. 2020;20(1):165.
11:45 am
Emma Bartle1,2
Anne Hill3, Jodie Copley3, Rebecca Olson4, Tessa Barnett2,3, Ruth Dunwoodie3 and Karen Luetsch5
1 School of Allied Health, The University of Western Australia
2 School of Dentistry, The University of Queensland
3 School of Health and Rehabilitation Sciences, The University of Queensland
4 School of Social Science, The University of Queensland
5 School of Pharmacy, The University of Queensland
Background
At a broad level, interprofessional education aims to achieve a social change agenda by improving teamwork and reducing power imbalances within healthcare teams (1). To achieve this, future health professionals must be prepared with skills to meet the social, emotional and interpersonal demands of interprofessional practice (IPP). The VOTIS (Video Observation Tool for Interprofessional Skills) was developed to foster reflexive dispositional learning of interprofessional skills, informed by theories of socio-personal learning and a novel video-based methodology (2,3). Feedback from the initial pilot indicated the need for further refinement to increase the tool's utility.
Summary of work
The VOTIS was refined through a two-stage process. A Clinical Educator Reference Group was formed to establish content validity. Interrater reliability was established and the revised tool piloted across a range of contexts. Student and educator feedback was collected through surveys and focus groups.
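The abstract does not name the reliability statistic used to establish interrater reliability. As one plausible illustration, a Cohen's kappa check for two educators scoring the same recording might look like the sketch below; the item scores and the 3-point scale are invented.

```python
# Hedged sketch: Cohen's kappa for two raters scoring the same student
# on hypothetical VOTIS items. The abstract does not specify the
# statistic or the scale; both are assumptions here.
from sklearn.metrics import cohen_kappa_score

# Assumed 3-point scale per item: 0 = not observed, 1 = emerging, 2 = demonstrated.
rater_a = [2, 1, 2, 0, 1, 2, 2, 1]
rater_b = [2, 1, 1, 0, 1, 2, 2, 2]

kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa: {kappa:.2f}")  # ~0.6+ is often read as substantial agreement
```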
Results
Students reported the use of structured reflection on their IPP skills improved their awareness of their own communication with team members and led to behaviour change. Clinical educators indicated the VOTIS provided a structured framework to deliver constructive feedback to support interprofessional student learning and skill development.
Discussion
The VOTIS addresses a gap in existing IP assessment by focusing on students engaged in non-patient-contact interactions, such as case conferences, treatment planning, and debriefing. Existing tools focus on team interactions with patients; however, development of IP skills often occurs when students are not with patients, and it is in these situations that additional issues of power and conflict may emerge.
Conclusions
The VOTIS incorporates video to provide observational data about students’ IP skills that articulates into discipline-specific tools for formative assessment. The significance of this tool lies in its ability to assess students’ IPP using evidence of observable behaviours.
References (maximum three)
1. Olson R. How would an egalitarian health care system operate? Power and conflict in interprofessional education. Medical Education. 2015;49(4):353–354. doi: 10.1111/medu.12686
2. Hill AE, Bartle E, Copley JA, Olson R, Dunwoodie R, Barnett T et al. The VOTIS, part 1: development and pilot trial of a tool to assess students’ interprofessional skill development using video-reflexive ethnography. Journal of Interprofessional Care. 2023;37(2):223-231. Epub 2022 Apr 11. doi: 10.1080/13561820.2022.2052270
3. Olson RE, Copley JA, Bartle E, Hill AE, Barnett T, Dunwoodie R et al. The VOTIS, part 2: Using a video-reflexive assessment activity to foster dispositional learning in interprofessional education. Journal of Interprofessional Care. 2023;37(2):232-239. doi: 10.1080/13561820.2022.2037531
12:00 pm
Amanda Wilson1
1 UTS
1. Background:
In the contemporary academic environment, students desire more consistent, personalised, and actionable feedback. This project describes a novel rubric design called the ‘marking algorithm’: a step-by-step decision-making tool that guides educators through a standardised process for assigning marks. The algorithm acts as a systematic guide that evaluates various criteria, such as accuracy, relevance or reference quality, with pathways leading the marker from one criterion to the next based on previous evaluations. For each criterion encountered, the algorithm assigns a specific grade or number of points. The process is iterative, so if subsequent sections provide additional context, the algorithm may return to earlier decision nodes to adjust scores.
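As a sketch of how such a rubric might work in code, the decision-node walk below awards points per criterion and routes the marker onward based on the result. All criteria names, thresholds, and point values are hypothetical, not the symposium's actual design.

```python
# Hedged sketch of a 'marking algorithm': a rubric expressed as linked
# decision nodes that assign points and route to the next criterion.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Node:
    criterion: str
    evaluate: Callable[[dict], int]            # points for this criterion
    next_node: Callable[[int], Optional[str]]  # route based on points awarded

def run_marking_algorithm(nodes, start, submission):
    """Walk the decision pathway, recording points per criterion.

    Revisiting an earlier node overwrites its score, which is one way
    to model the iterative adjustment described above.
    """
    scores = {}
    current = start
    while current is not None:
        node = nodes[current]
        points = node.evaluate(submission)
        scores[node.criterion] = points
        current = node.next_node(points)
    return scores

# Hypothetical pathway: weak accuracy sends the marker straight to
# reference quality; strong accuracy checks relevance first.
nodes = {
    "accuracy": Node(
        criterion="accuracy",
        evaluate=lambda s: 10 if s.get("claims_correct") else 3,
        next_node=lambda pts: "relevance" if pts >= 10 else "references",
    ),
    "relevance": Node(
        criterion="relevance",
        evaluate=lambda s: 5 if s.get("on_topic") else 1,
        next_node=lambda pts: "references",
    ),
    "references": Node(
        criterion="references",
        evaluate=lambda s: min(s.get("quality_refs", 0), 5),
        next_node=lambda pts: None,  # end of pathway
    ),
}

print(run_marking_algorithm(nodes, "accuracy",
                            {"claims_correct": True, "on_topic": True, "quality_refs": 4}))
# -> {'accuracy': 10, 'relevance': 5, 'references': 4}
```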
2. Why is the topic important for practice?:
With current evaluation methods, discrepancies in marking are inevitable. A standardised and transparent system like the ‘marking algorithm’ can streamline the marking process, ensuring consistency and enhancing the overall educational experience.
3. Symposium Format & Participant Engagement Methods:
- Opening Session: Changing dynamics of academic feedback and the need for evolution.
- Interactive Workshop: Participants will be guided on converting a traditional rubric into a marking algorithm, offering hands-on experience.
- Breakout Sessions:
  - A: Exploring the intricacies of the marking algorithm: design, process, and functionality.
  - B: AI's role in academic marking: a glimpse into the future.
- Q&A Session
- Feedback & Reflection: Attendees share thoughts, concerns, and suggestions.
4. Take-home Messages / Symposium Outcomes / Implications for Further Research and/or Practice:
- Awareness: Recognition of the need for more consistent and transparent academic evaluations.
- Skill Acquisition: Capability to convert traditional rubrics into a marking algorithm.
- Vision for the Future: Potential of AI and algorithmic tools in reshaping academic marking.
- Action Items: Educators will gain insights and tools to implement and adapt the marking algorithm.
References (maximum three)
- No references
12:15 pm
Chaoyan Dong1
Vaikunthan Rajaratnam2
1 Sengkang General Hospital
2 Khoo Teck Puat Hospital
Background
Health professions educators increasingly teach online, but how do they know whether their use of technology is effective? We summarized the literature on instructional design and multimedia learning, applied best practices in health professions education, and developed a rubric for educators to self-assess technology-integrated teaching.
Summary of work
The rubric covers four areas: preparation, online teaching, learner engagement, and a summary of the online teaching. For each component, the educator uses a scale of Baseline, Effective, and Excellent, with descriptors provided to illustrate each point. (A minimal code sketch of this structure follows the component lists below.)
Preparation includes:
- Make sure technology works on multiple devices.
- Design good learning objectives (LOs) using the SMART (Specific, Measurable, Attainable, Relevant, Timely) criteria.
- Align LOs, content, activities, and assessment.
- Assign appropriate pre-reading/digital content/tasks, and ensure learners complete these tasks before class.
Online teaching includes:
- Introduce the LOs and session outline.
- Use effective multimedia, e.g., audio, graphics, text, and video.
Learner engagement includes:
- Chunk the information to hold learners’ attention online.
- Promote collaboration through Breakout Rooms or Whiteboard.
- Respond to learners’ questions promptly through multiple channels, such as the chat.
- Facilitate online discussion and keep it on track.
- Use multiple assessment strategies to measure LOs.
The summary of online teaching includes:
- Reinforce the LOs to summarize the teaching.
- Invite learners for feedback and provide follow-up opportunities.
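As a minimal sketch of how this structure could be captured for self-assessment, the snippet below encodes the four areas and maps the Baseline/Effective/Excellent scale to points. The item wording is condensed from the lists above, and the numeric scoring and helper name are assumptions, not part of the published rubric.

```python
# Hedged sketch: the four-area rubric as data, with the scale mapped to
# points by index. Item wording is condensed; the scoring is assumed.
SCALE = ("Baseline", "Effective", "Excellent")  # 0, 1, 2 points by index

RUBRIC = {
    "Preparation": [
        "Technology works on multiple devices",
        "SMART learning objectives",
        "LOs, content, activities and assessment aligned",
        "Pre-class reading/digital content assigned and completed",
    ],
    "Online teaching": [
        "LOs and outline introduced",
        "Effective multimedia used",
    ],
    "Learner engagement": [
        "Information chunked",
        "Collaboration via Breakout Rooms or Whiteboard",
        "Prompt responses across channels",
        "Discussion facilitated and kept on track",
        "Multiple assessment strategies for LOs",
    ],
    "Summary": [
        "LOs reinforced in the wrap-up",
        "Learner feedback invited, with follow-up",
    ],
}

def summarise(self_ratings):
    """Average each area's rated items onto 0..2; None if no items rated."""
    summary = {}
    for area, items in RUBRIC.items():
        points = [SCALE.index(self_ratings[item]) for item in items if item in self_ratings]
        summary[area] = sum(points) / len(points) if points else None
    return summary

# Example: an educator who has rated only two Preparation items so far.
print(summarise({
    "Technology works on multiple devices": "Excellent",
    "SMART learning objectives": "Effective",
}))  # -> {'Preparation': 1.5, 'Online teaching': None, ...}
```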
Results
In pilot testing, five colleagues used the rubric to assess their own teaching. The overall feedback was that the rubric provided a guiding framework and was easy to understand and use for self-assessing online teaching.
Discussion
We have shared the content development of the rubric; however, rigorous testing is still required to establish its validity and reliability.
Take-home messages
The rubric could guide health professions educators in planning and conducting online teaching.
References (maximum three)
1. Mayer, R.E. (2009) Multimedia Learning, 2nd edn. Cambridge: Cambridge University Press.
2. Morrison, G.R., Ross, S.M., Morrison, J.R. & Kalman, H.K. (2019) Designing Effective Instruction, 8th edn. Hoboken, NJ: Wiley.