Workshop Facilitators: Stefan Schauber, Dr Dario Cecilio Fernandes
The Ottawa consensus framework for good assessment (Norcini et al. 2018) highlights several criteria for the quality of assessments. Especially in the context of high-stakes assessment, the criteria of validity, reliability, and equivalence can be substantiated using psychometric methods. This workshop will facilitate an understanding of the basic features of Item Response Theory (IRT; DeMars 2010) and of how it can practically help to evaluate aspects of assessment quality as well as support decision-making processes. In educational measurement, IRT has become the de facto standard for analyzing data from high-stakes tests. Furthermore, IRT is the foundation for many technology-enhanced assessments and, more specifically, for computerized adaptive testing. Hence, an understanding of the underlying statistical procedures will help practitioners capitalize on new developments in the area of digital assessments. Participants will be able to discuss the advantages and disadvantages of IRT and to interpret the most important outcomes of an IRT analysis.
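As a minimal illustration of the kind of model the workshop covers (not part of the workshop materials themselves), the widely used two-parameter logistic (2PL) IRT model expresses the probability of a correct response as a function of examinee ability and two item parameters; the function and parameter names below are a standard textbook sketch, not code from the facilitators:

```python
import math

def p_correct_2pl(theta: float, a: float, b: float) -> float:
    """Probability of a correct response under the 2PL IRT model:
    P(theta) = 1 / (1 + exp(-a * (theta - b)))
    theta: examinee ability, a: item discrimination, b: item difficulty."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# When ability equals item difficulty (theta == b), the probability
# of a correct response is exactly 0.5, whatever the discrimination.
print(p_correct_2pl(theta=0.0, a=1.2, b=0.0))  # 0.5
```

The item parameters `a` and `b` are among the "most important outcomes" of an IRT analysis referred to above: `b` locates each item on the same ability scale as the examinees, and `a` indicates how sharply the item separates lower- from higher-ability test takers.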