Important gender differences relating to trauma history, offending, and mental health needs are not sufficiently considered in most (risk) assessment and treatment procedures in forensic practice. We developed guidelines for gender-responsive work in Dutch forensic mental health care. The experiences of practitioners and forensic psychiatric patients were collected and analyzed by means of an online survey (n = 295) and interviews with professionals (n = 22) and with female (n = 8) and male (n = 3) patients. In the survey, guidelines regarding gender-sensitive (risk) assessment and trauma-informed care were rated as most relevant. In the interviews, we focused on experiences with, and wishes for, trauma treatment and gender-mixed treatment. Practical guidelines were written based on the results of the survey, the interviews, and the literature; these were presented in expert meetings with patients and practitioners and further refined based on their comments. Applying these guidelines may contribute to improved treatment for female patients and thereby help prevent relapse.
Chapter 17 in 'Challenging Bias in Forensic Psychological Assessment and Testing - Theoretical and Practical Approaches to Working with Diverse Populations'.
Content moderation is commonly used by social media platforms to curb the spread of hateful content. Yet, little is known about how users perceive this practice and which factors may influence their perceptions. Publicly denouncing content moderation—for example, portraying it as a limitation on free speech or as a form of political targeting—may play an important role in this context. Evaluations of moderation may also depend on interpersonal mechanisms triggered by perceived user characteristics. In this study, we disentangle these factors by examining how the gender, perceived similarity, and social influence of a user publicly complaining about a content-removal decision shape evaluations of moderation. In an experiment (n = 1,586) conducted in the United States, the Netherlands, and Portugal, participants witnessed the moderation of a hateful post, followed by a publicly posted complaint about the moderation by the affected user. We measured evaluations of the fairness, legitimacy, and bias of the moderation decision, with perceived similarity and social influence as mediators. The results indicate that arguments about freedom of speech significantly lower the perceived fairness of content moderation. Factors such as the moderated user's social influence affected outcomes differently depending on that user's gender. We discuss the implications of these findings for content-moderation practices.