Higher professional education (hbo) in the Netherlands struggles with limited student success in terms of study progress, dropout, and perceived competence. What measures can hbo institutions take? This contribution, based on the dissertation 'One size fits all?', examines learning-psychology factors and interactionalist factors that may explain study success in the first year of hbo. The dissertation shows how numerous variables are associated with study success. Using linear structural models, the author makes cautious claims about the effects that factors have on one another and on study success (dropout, study progress, and perceived competence). Among the learning-psychology factors, self-confidence and motivational aspects (intrinsic motivation and procrastination) showed the strongest association with study success. From the interactionalist perspective, intention to persist was the most crucial factor. The associations between study success and the other factors from both approaches were weak (deep learning) or fluctuated strongly across models and groups (self-regulation, self-efficacy; social and academic integration, satisfaction with active learning and with academic knowledge and skills, contact hours, self-study). Many factors play a role, and they are interrelated differently across background characteristics (gender, prior education, ethnicity, discipline). Moreover, different factors do not have the same but rather conflicting effects on competing learning outcomes, such as earning credits and acquiring competence. The result is not a cookbook of ready-made recipes for solving the completion-rate problem, but for educational practice (institutions, degree programmes, teachers) these are important insights that can be put to use. Further research remains desirable into the interaction between factors from different explanatory models and theories, and into the effects of measures aimed at improving completion rates.
Many have suggested that AI-based interventions could enhance learning through personalization, by improving teacher effectiveness, or by optimizing educational processes. However, they could also have unintended or unexpected side-effects, such as undermining learning by enabling procrastination, or reducing social interaction by individualizing learning processes. Responsible scientific experiments are required to map both the potential benefits and the side-effects. The procedures currently used by research ethics committees to screen experiments do not take into account the specific risks and dilemmas that AI poses. Previous studies identified sixteen conditions that can be used to judge whether trials with experimental technology are responsible. These conditions, however, have not yet been translated into practical procedures, nor do they distinguish between different types of AI applications and risk categories. This paper explores how those conditions could be further specified into procedures that help facilitate and organize responsible experiments with AI, while differentiating between types of AI applications based on their level of automation. The four procedures we propose are (1) a process of gradual testing, (2) risk- and side-effect detection, (3) explainability and severity, and (4) democratic oversight. These procedures can be used by researchers and ethics committees to enable responsible experiments with AI interventions in educational settings. Implementation and compliance will require collaboration between researchers, industry, policy makers, and educational institutions.
Autonomous learning behavior is an important skill for students, but they often do not master it sufficiently. We investigated the potential of nudging as a teaching strategy in tertiary education to support three important autonomous learning behaviors: planning, preparing for class, and asking questions. Nudging, a strategy originating from behavioral economics, steers human behavior by altering the choice environment. In this study, three nudges were designed by researchers in co-creation with teachers. A video booth to support planning behavior (n = 95), a checklist to support class preparation (n = 148), and a goal-setting nudge to encourage students to ask questions during class (n = 162) were tested in three field experiments in teachers' classrooms with students in tertiary education in the Netherlands. A mixed-effects model approach revealed a positive effect of the goal-setting nudge on students' grades and a marginal positive effect on the number of questions asked by students. Additionally, evidence for increased self-reported planning behavior was found in the video booth group, but no increase in deadlines met. No significant effects were found for the checklist. We conclude that, for some autonomous learning behaviors, primarily asking questions, nudging has potential as an easy, effective teaching strategy.