Among other things, learning to write entails learning how to use complex sentences effectively in discourse. Some research has therefore focused on relating measures of syntactic complexity to text quality. The existing research on this topic, however, appears inconclusive, and most of it has been conducted in English L1 contexts. This is potentially problematic, since the relevant syntactic indices may not be the same across languages. The current study is the first to explore which syntactic features predict text quality in Dutch secondary school students’ argumentative writing. To this end, the quality of 125 argumentative essays was rated and the syntactic features of the texts were analyzed. A multilevel regression analysis was then used to investigate which features contribute to text quality. The resulting model (explaining 14.5% of the variance in text quality) shows that the relative number of finite clauses and the ratio of relative clauses to finite clauses positively predict text quality. Discrepancies between our findings and those of previous studies indicate that the relations between syntactic features and text quality may vary with factors such as language and genre. Additional (cross-linguistic) research is needed to gain a more complete understanding of the relationships between syntactic constructions and text quality and of the potential moderating role of language and genre.
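As a hedged illustration of the analytic approach described above, the sketch below fits a multilevel (mixed-effects) regression in Python with statsmodels, predicting rated text quality from two syntactic features while allowing a random intercept per class. The column names (quality, finite_clause_ratio, rel_to_finite_ratio, class_id) and the file name are hypothetical placeholders, not the study's actual data or model specification.

```python
# Minimal sketch of a multilevel regression relating syntactic features to
# rated text quality. Variable names are illustrative assumptions only.
import pandas as pd
import statsmodels.formula.api as smf

essays = pd.read_csv("essays.csv")  # hypothetical file: one row per essay

# Random intercept per class accounts for essays being nested within classes.
model = smf.mixedlm(
    "quality ~ finite_clause_ratio + rel_to_finite_ratio",
    data=essays,
    groups=essays["class_id"],
)
result = model.fit()

# Fixed-effect coefficients indicate each syntactic feature's contribution
# to text quality, controlling for class-level variation.
print(result.summary())
```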
Psychologists, psycholinguists, and other researchers using language stimuli have been struggling for more than 30 years with the problem of how to analyze experimental data that contain two crossed random effects (items and participants). The classical analysis of variance does not apply; alternatives have been proposed but have failed to catch on, and a statistically unsatisfactory procedure of using two approximations (known as F1 and F2) has become the standard. A simple and elegant solution using mixed-model analysis has existed for 15 years, and recent improvements in statistical software have made it widely available. The aim of this article is to increase the use of mixed models by giving a concise practical introduction and clear directions for undertaking the analysis in the most popular statistical packages. The article also introduces the djmixed add-on package for SPSS, which makes entering the models and reporting their results as straightforward as possible.
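The article itself gives directions for SPSS (including the djmixed add-on) and other mainstream packages; purely as an illustration of the same idea, the sketch below fits a mixed model with crossed random effects for participants and items in Python's statsmodels, where both factors enter as variance components within a single dummy group. The variable names (rt, condition, subject, item) and the data file are hypothetical assumptions, not the article's worked example.

```python
# Hedged sketch: mixed model with crossed random effects (participants and items).
# In statsmodels, crossed effects are specified by treating the whole dataset as
# one group and adding each factor as a variance component.
import pandas as pd
import statsmodels.formula.api as smf

data = pd.read_csv("experiment.csv")  # hypothetical trial-level data
data["all"] = 1  # single dummy group spanning the full dataset

model = smf.mixedlm(
    "rt ~ condition",
    data=data,
    groups="all",
    vc_formula={"subject": "0 + C(subject)", "item": "0 + C(item)"},
)
result = model.fit()

# The fixed effect of condition is evaluated while variance due to both
# participants and items is modeled simultaneously (no separate F1/F2 analyses).
print(result.summary())
```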
Music interventions are used for stress reduction in a variety of settings because of the positive effects of music listening on both physiological arousal (e.g., heart rate, blood pressure, and hormonal levels) and psychological stress experiences (e.g., restlessness, anxiety, and nervousness). To summarize the growing body of empirical research, two multilevel meta-analyses of 104 randomized controlled trials (RCTs), containing 327 effect sizes and 9,617 participants, were performed to assess the strength of the effects of music interventions on physiological and psychological stress-related outcomes and to test potential moderators of the intervention effects. Results showed that music interventions had an overall significant effect on stress reduction for both physiological (d = .380) and psychological (d = .545) outcomes. Moderator analyses further showed that the type of outcome assessment moderated the effects of music interventions on stress-related outcomes: larger effects were found on heart rate (d = .456) than on blood pressure (d = .343) and hormone levels (d = .349). Implications for stress-reducing music interventions are discussed.
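To illustrate how pooled effects of this kind are obtained, the sketch below performs a deliberately simplified random-effects pooling of standardized mean differences in Python's statsmodels. It is not the multilevel model used in these meta-analyses (which accounts for multiple, dependent effect sizes per trial), and all numbers are invented purely for illustration.

```python
# Hedged sketch: simple random-effects pooling of effect sizes.
# Assumes independent effects; the meta-analyses above instead use a
# multilevel model for dependent effect sizes within trials.
import numpy as np
from statsmodels.stats.meta_analysis import combine_effects

effects = np.array([0.42, 0.31, 0.58, 0.27])        # Cohen's d per trial (made up)
variances = np.array([0.020, 0.015, 0.030, 0.018])  # sampling variances (made up)

result = combine_effects(effects, variances)

# Summary table with fixed- and random-effects pooled estimates and CIs.
print(result.summary_frame())
```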