A promising contribution of Learning Analytics is the presentation of a learner's own learning behaviour and achievements via dashboards, often in comparison to peers, with the goal of improving self-regulated learning. However, there is a lack of empirical evidence on the impact of these dashboards, and few designs are informed by theory. Many dashboard designs struggle to translate awareness of learning processes into actual self-regulated learning. In this study we investigate a Learning Analytics dashboard that builds on existing evidence on social comparison to support motivation, metacognition and academic achievement. Motivation plays a key role in whether learners will engage in self-regulated learning in the first place, and social comparison can be a significant driver in increasing it. We performed two randomised controlled interventions in different higher-education courses, one of which took place online due to the COVID-19 pandemic. Students were shown their current and predicted performance in a course alongside that of peers with similar goal grades. The peer sample was selected to elicit slight upward comparison. We found that the dashboard promotes extrinsic motivation and leads to higher academic achievement, indicating an effect of dashboard exposure on learning behaviour, despite an absence of effects on metacognition. These results provide evidence that carefully designed social comparison, rooted in theory and empirical evidence, can be used to boost motivation and performance. Our dashboard is a successful example of how social comparison can be implemented in Learning Analytics dashboards.
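The abstract does not spell out how the peer sample was constructed; the sketch below is a minimal, hypothetical Python illustration of one way to pick peers with similar goal grades while centring the sample slightly above the student's own performance, so that the display elicits mild upward comparison. The function name, data layout and the `delta` threshold are assumptions for illustration, not the authors' implementation.

```python
def select_comparison_peers(own_grade, own_goal, peers, n=5, delta=0.5):
    """Hypothetical peer selection for a social-comparison dashboard.

    peers: list of (current_grade, goal_grade) tuples for other students.
    Keeps peers whose goal grade is close to the student's own goal,
    then prefers peers scoring slightly above the student's current
    grade, so the displayed sample elicits mild upward comparison.
    """
    # Restrict to peers pursuing a similar goal grade.
    similar_goal = [p for p in peers if abs(p[1] - own_goal) <= delta]
    # Centre the sample just above the student's current performance.
    target = own_grade + delta
    similar_goal.sort(key=lambda p: abs(p[0] - target))
    return similar_goal[:n]


# Example: a student with grade 6.5 aiming for 7.0 is shown peers
# clustered around 7.0, i.e. slightly ahead of them.
peers = [(6.0, 7.0), (6.9, 7.0), (7.1, 6.9), (8.5, 9.0), (7.0, 7.2)]
print(select_comparison_peers(6.5, 7.0, peers))
```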
Background: With the increased attention on implementing inquiry activities in primary science classrooms, a growing interest has emerged in assessing students' science skills. Research has been concerned with the limitations and advantages of different test formats for assessing these skills.
Purpose: This study explores the construction of different instruments for measuring science skills by categorizing items systematically at three subskill levels (science-specific, thinking, metacognition) and across different activities of the empirical cycle.
Sample: The study included 128 5th and 6th grade students from seven primary schools in the Netherlands.
Design and method: Seven measures were used: a paper-and-pencil test, three performance assessments, two metacognitive self-report tests and a test used as an indication of general cognitive ability.
Results: Reliabilities of all tests indicate sufficient internal consistency. Positive correlations between the paper-and-pencil test and the performance assessments reinforce that the different tests measure a common core of similar skills, thus providing evidence for convergent validity. Results also show that students' ability in performing scientific inquiry is significantly related to general cognitive ability. No relations are found between the measure of general metacognitive ability and the paper-and-pencil test or the three performance assessments. By contrast, the metacognitive self-report test, constructed to obtain information about the application of metacognitive abilities in performing scientific inquiry, shows significant, although small, correlations with two performance assessments. Further explorations reveal sufficient scale reliabilities at subskill and empirical-step level.
Conclusions: The present study shows that science skills can be measured reliably by categorizing items at subskill and step level. Additional diagnostic information can be obtained by examining mean scores at both subskill and step level. Such measures are not only suitable for assessing students' mastery of science skills but can also provide teachers with diagnostic information to adapt their instruction and foster their students' learning.
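The abstract reports internal consistency and convergent-validity correlations without giving the analysis pipeline. As a rough sketch of the two standard computations these results typically rest on, the Python/NumPy snippet below computes Cronbach's alpha for a students-by-items score matrix and a Pearson correlation between two tests' total scores. The function names, data layout and simulated scores are assumptions for illustration only, not the study's actual data or code.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_students, n_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances / total_variance)

def convergent_r(test_a: np.ndarray, test_b: np.ndarray) -> float:
    """Pearson correlation between two tests' total scores, as
    evidence that both measure a common core of similar skills."""
    return np.corrcoef(test_a.sum(axis=1), test_b.sum(axis=1))[0, 1]

# Example with simulated item scores for 128 students.
rng = np.random.default_rng(0)
paper_pencil = rng.integers(0, 4, size=(128, 20))
performance = rng.integers(0, 4, size=(128, 15))
print(cronbach_alpha(paper_pencil))
print(convergent_r(paper_pencil, performance))
```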