Senior Lecturer
During my work as a teacher and policy advisor in primary, secondary, and tertiary education, I became fascinated with the (lack of) sustainability of innovations in education. Many educational innovations seem to be reinvented multiple times without being evaluated thoroughly. Did we not do this before? Did it contribute to the pursuit of our goals? I decided to become a researcher in order to conduct rigorous evaluations that can improve the learning capacity of educational organizations (and perhaps even the broader field). During my PhD, and since then as a researcher at the Amsterdam University of Applied Sciences, I have conducted field experiments to learn more about the effectiveness of educational interventions in higher education. So far, I have tested a goal-setting intervention, a chatbot coach, supplemental instruction, and a summer bridge program, and I will look for further opportunities to deepen my understanding of the factors that contribute to student success. Apart from conducting research, I also contribute to making the outcomes and implications of scientific research understandable and accessible to practitioners by publishing short explainers and posts that summarize important findings.
Many have suggested that AI-based interventions could enhance learning through personalization, by improving teacher effectiveness, or by optimizing educational processes. However, they could also have unintended or unexpected side effects, such as undermining learning by enabling procrastination, or reducing social interaction by individualizing learning processes. Responsible scientific experiments are required to map both the potential benefits and the side effects. The procedures currently used by research ethics committees to screen experiments do not account for the specific risks and dilemmas that AI poses. Previous studies identified sixteen conditions that can be used to judge whether trials with experimental technology are responsible. These conditions, however, have not yet been translated into practical procedures, nor do they distinguish between different types of AI applications and risk categories. This paper explores how those conditions could be specified further into procedures that help facilitate and organize responsible experiments with AI, while differentiating between types of AI applications based on their level of automation. The four procedures that we propose are (1) a process of gradual testing, (2) risk and side-effect detection, (3) explainability and severity, and (4) democratic oversight. These procedures can be used by researchers and ethics committees to enable responsible experiments with AI interventions in educational settings. Implementation and compliance will require collaboration between researchers, industry, policy makers, and educational institutions.
Teaching assistants perform various types of educational tasks in higher education. This systematic literature review mapped out the research on how teaching assistants are prepared for their tasks, what types of instruction they provide, and what this yields for both the teaching assistants and their students. Teaching assistants were primarily deployed in practical skills education and laboratory education in medical and chemistry programs, and to a lesser extent also in case-based education and study-skills education. In terms of preparation, they benefited most from training that demonstrates the expected behaviour, provides opportunities for practice, and offers feedback on that practice. When instruction by teaching assistants is organized as a supplement to regular instruction, it leads to higher student satisfaction and better performance among the students who receive it compared to those who do not. The performance and satisfaction of students taught in practical sessions by teaching assistants versus teachers are comparable. In organizing teaching assistants in education, lessons can be learned from the two prevailing approaches: Supplemental Instruction (SI-PASS) and Peer Assisted Learning (PAL). Central coordination of the training of supervisors and teaching assistants, clear complementary job profiles, deployment in practical courses, and embedding within programs based on appropriate learning outcomes can all contribute to sustainable implementation.
In the summer of 2024 we investigated the impact of the summer bridge program for first-generation students at the HvA. We did this in collaboration with researchers Tieme Janssen, Felicitas Biwer, Niklas Wenzel, and Sanne van Herpen, and with Mohammed Skori, Aimee Kaandorp, and Sabina Nahar of Student Affairs. We compared the effects of two variants of the program with each other and with the experiences of a control group. Based on questionnaires, observations, and interviews, plus an analysis of credits earned and dropout, we drew several conclusions:
• Participants had a wide range of motivations for taking part; the target groups overlapped.
• The reach of Tune In was also large among non-first-generation students.
• Program B appears to work well (although some participants experienced it as school-like or boring), especially for the non-target group.
• Program A appears to work for first-generation students but to work negatively for the non-target group.
• Both programs meet an important need for orientation, through information and the building of a social network.
• Detecting effects on dropout or credits earned will probably require a larger sample.