This qualitative study explored the emotional responses of two white Dutch student teachers during a course based on Critical Race Theory (CRT). Following Plutchik's (2001) classification of 32 emotions, the analysis of their weekly diaries identified 16 emotions. Similar emotional responses were identified in both diaries. However, the analysis did not reveal a straightforward emotional path through the course: the number and types of emotional responses, both comfortable and uncomfortable, fluctuated weekly and occurred simultaneously in various combinations. Even when similar emotional responses were identified, the students connected differently to the course content, which could be explained by the different starting points from which the two students entered the course. The findings extend past work by identifying the variety and complexity of the emotional responses of white student teachers during a CRT-based course, and can inform course conditions that prepare teachers to contribute to anti-racist education.
The reporting of research findings is often selective. This threatens the validity of the published body of knowledge if the decision to report depends on the nature of the results. Evidence from studies on the causes and mechanisms underlying selective reporting may help to avoid or reduce reporting bias. Such research should be guided by a theoretical framework of the possible causal pathways that lead to reporting bias. We build upon a classification of determinants of selective reporting that we recently developed in a systematic review of the topic. The resulting theoretical framework features four clusters of causes. Two clusters are necessary causes: (A) motivations (e.g., a preference for particular findings) and (B) means (e.g., a flexible study design). Combined, these two constitute a sufficient cause for reporting bias to occur. The framework also features two clusters of component causes: (C) conflicts and balancing of interests, relating to the individual researcher or the team, and (D) pressures from science and society. The component causes may modify the effect of the necessary causes or may lead to reporting bias mediated through them. Our theoretical framework is meant to inspire further research and to create awareness of reporting bias and its causes among researchers and end-users of research.
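As a toy illustration of the framework's causal logic (our own sketch, not part of the paper): the necessary causes (A) and (B) must both be present, and together they suffice, while the component causes (C) and (D) act only by raising the chance of (A)/(B) or by modifying their effect. All names in the Python sketch below are hypothetical.

    # Toy sketch of the causal structure; all names are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Situation:
        motivation: bool         # (A) e.g. a preference for particular findings
        means: bool              # (B) e.g. a flexible study design
        conflicts: bool = False  # (C) conflicts/balancing of interests
        pressures: bool = False  # (D) pressures from science and society

    def reporting_bias_can_occur(s: Situation) -> bool:
        # (A) and (B) are each necessary; jointly they are sufficient.
        # (C) and (D) are component causes: in this toy model they matter
        # only insofar as they influence (A)/(B) upstream or amplify their
        # effect once both are present, so they do not appear in the test.
        return s.motivation and s.means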
Objective: To automatically recognize self-acknowledged limitations in clinical research publications to support efforts in improving research transparency. Methods: To develop our recognition methods, we used a set of 8431 sentences from 1197 PubMed Central articles. A subset of these sentences was manually annotated for training/testing, and inter-annotator agreement was calculated. We cast the recognition problem as a binary classification task, in which we determine whether a given sentence from a publication discusses self-acknowledged limitations or not. We experimented with three methods: a rule-based approach based on document structure, supervised machine learning, and a semi-supervised method that uses self-training to expand the training set in order to improve classification performance. The machine learning algorithms used were logistic regression (LR) and support vector machines (SVM). Results: Annotators had good agreement in labeling limitation sentences (Krippendorff's α = 0.781). Of the three methods used, the rule-based method yielded the best performance with 91.5% accuracy (95% CI [90.1-92.9]), while self-training with SVM led to a small improvement over fully supervised learning (89.9%, 95% CI [88.4-91.4] vs 89.6%, 95% CI [88.1-91.1]). Conclusions: The approach presented can be incorporated into the workflows of stakeholders focusing on research transparency to improve reporting of limitations in clinical studies.
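A minimal sketch of the self-training procedure described above, assuming scikit-learn; the TF-IDF features, the margin-based confidence threshold, and the function names are illustrative assumptions rather than the authors' exact configuration.

    # Self-training for limitation-sentence classification: a sketch, not the
    # authors' implementation. Features and thresholds are placeholders.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.svm import LinearSVC

    def self_train(labeled_texts, labels, unlabeled_texts,
                   margin=1.0, max_rounds=5):
        """Grow the training set with confidently pseudo-labeled sentences,
        then refit the SVM (LogisticRegression could be swapped in)."""
        texts, y = list(labeled_texts), list(labels)
        pool = list(unlabeled_texts)
        vec, clf = TfidfVectorizer(ngram_range=(1, 2)), LinearSVC()
        for _ in range(max_rounds):
            X = vec.fit_transform(texts)
            clf.fit(X, y)
            if not pool:
                break
            scores = clf.decision_function(vec.transform(pool))
            confident = sorted(i for i, s in enumerate(scores)
                               if abs(s) >= margin)
            if not confident:
                break
            # Move confident pool sentences into the training set with the
            # classifier's own label (1 = limitation sentence, 0 = other).
            texts += [pool[i] for i in confident]
            y += [int(scores[i] > 0) for i in confident]
            pool = [s for i, s in enumerate(pool) if i not in set(confident)]
        return vec, clf

In this sketch, as in the results reported above (89.9% vs 89.6% accuracy), self-training would be expected to yield at most a small gain over the fully supervised classifier, since the pseudo-labels can only reinforce what the initial model already predicts confidently.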