Assessment in higher education (HE) often focuses on concluding modules with one or more tests that students need to pass. As a result, both students and teachers are primarily concerned with the summative function of assessment: information from tests is used to make pass/fail decisions about students. In recent years, increasing attention has been paid to the formative function of assessment, and the focus has shifted towards how assessment can stimulate learning. This shift, however, prompts a search for balance between the two functions of assessment. Programmatic assessment (PA) is an assessment concept that embraces the intertwining of both functions to strike a new balance. A growing number of higher education programmes have implemented PA. Although there is consensus about the theoretical principles underlying the design of PA, programmes make various specific design choices based on these principles, fitting their own context. This paper provides insight into the design choices that programmes make when implementing PA and into the considerations that play a role in making these choices. Such an overview is important for research purposes because it creates a framework for investigating the effects of different design choices within PA.
A model for programmatic assessment in action is proposed that optimizes assessment for learning as well as decision making on learner progress. It is based on a set of assessment principles interpreted from empirical research. The model specifies cycles of training, assessment, and learner-support activities, completed by intermediate and final moments of evaluation based on aggregated data points. It is essential that individual data points are maximized for their learning and feedback value, whereas high-stakes decisions are based on the aggregation of many data points. Expert judgment plays an important role in the programme. The notions of sampling and bias reduction are fundamental for dealing with subjectivity; bias reduction is sought in procedural assessment strategies derived from qualitative research criteria. A number of challenges and opportunities around the proposed model are discussed. One of its virtues would be to move beyond the dominant psychometric discourse around individual instruments towards a systems approach to assessment design based on empirically grounded theory.
Would you, as a teaching team, like to gain more insight into your assessment programme as a whole? Do you want to take a critical look at possible points for improvement? Or are you working on a redesign? With KIT2.0, a programme team examines the design of its degree programme from the perspective of the principles of programmatic assessment.

Goal
With KIT2.0 we want to help programme teams look critically at their curriculum and assessment programme. We do this on the basis of five quality criteria: fitness for purpose, validity, learning function, decision-making function, and conditions.

Results
On the KIT2.0 website you will find information and videos with further explanation. Via the website you can also log in (free of charge) and get started with KIT2.0 yourself. At www.husite.nl/toetsing you will find information and practical examples of programmatic assessment. Blog about an interview with Liesbeth Baartman on KIT2.0. Short explanation of programmatic assessment by Dr Liesbeth Baartman (2017), assessment meeting at Hogeschool van Rotterdam. Keynote by Dr Liesbeth Baartman (2017) with an introduction to assessment programmes, Fontys assessment conference.
Baartman, L.K.J., Kloppenburg, R., & Prins, F.J. (2017). Kwaliteit van toetsprogramma's. In H. van Berkel, A. Bax, & D. Joosten-ten Brinke (Eds.), Toetsen in het hoger onderwijs (pp. 38-49). Bohn Stafleu van Loghum.
Van der Vleuten, C.P.M., Schuwirth, L.T.W., Driessen, E., Dijkstra, J., Tigelaar, D., Baartman, L.K.J., & Van Tartwijk, J. (2012). A model for programmatic assessment fit for purpose. Medical Teacher, 34, 205-214.
Dronkers, J., de Kwant, E., Kruitwagen, C., & Baartman, L. (2017). Kwantitatieve analyse van een toetsprogramma. Examens, 3 (August).

Duration
1 September 2018 to 1 September 2020

Approach
KIT2.0 is based on scientific research into programmatic assessment. Its origins lie in the doctoral research of Dr Liesbeth Baartman and Dr Raymond Kloppenburg (from which KIT1.0 emerged). KIT2.0 was developed on the basis of the latest insights from the scientific literature on programmatic assessment and ten years of practical experience, and was refined in validation rounds with degree programmes and researchers.

Taking part?
Would your programme like to take part in the research on KIT2.0? Please contact Liesbeth Baartman. We are working on evaluating and improving KIT2.0 on the basis of experiences in practice.