Risk matrices have been widely used in industry under the notion that risk is the product of the likelihood and the severity of the hazard or safety case under consideration. When reliable raw data are not available to feed mathematical models, experts are asked to state their estimations. This paper presents two studies conducted in a large European airline: the first concerned the weighting of 14 experienced pilots' judgment through software, and the second the calculation of agreement amongst 10 accident investigators asked to assess the worst outcome, most credible outcome, and risk level for 12 real events. According to the results, only 4 of the 14 pilots could be reliably used as experts, and agreement amongst the accident investigators was low to moderate.
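The likelihood-times-severity notion above can be sketched as a simple scoring function. This is an illustrative example only, not the instrument used in the studies; the 1-5 ordinal scales and the bin thresholds are assumptions chosen for the sketch.

```python
def risk_level(likelihood: int, severity: int) -> str:
    """Map a likelihood x severity product to a qualitative risk level.

    Both inputs are assumed to be ordinal ratings from 1 (lowest) to 5
    (highest); the bin thresholds below are arbitrary examples.
    """
    score = likelihood * severity  # risk as the product of the two ratings
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

# A frequent, severe hazard lands in the top bin; a rare, minor one in the bottom.
print(risk_level(5, 4))  # high
print(risk_level(2, 3))  # medium
print(risk_level(1, 2))  # low
```

Binning the product into a small set of qualitative levels is what turns the continuous-looking score into the familiar colored cells of a risk matrix; the studies' point is that the expert ratings feeding such a matrix may themselves be unreliable.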
The Short-Term Assessment of Risk and Treatability: Adolescent Version (START:AV) is a risk assessment instrument for adolescents that estimates the risk of multiple adverse outcomes. Prior research into its predictive validity is limited to a handful of studies, conducted with the START:AV pilot version and often by the instrument's developers. The present study examines the START:AV's field validity in a secure youth care sample in the Netherlands. Using a prospective design, we investigated whether the total scores, lifetime history, and final risk judgments of 106 START:AVs predicted inpatient incidents during a 4-month follow-up. Final risk judgments and lifetime history predicted multiple adverse outcomes, including physical aggression, institutional violations, substance use, self-injury, and victimization. The predictive validity of the total scores was significant only for physical aggression and institutional violations. Hence, the short-term predictive validity of the START:AV for inpatient incidents in a residential youth care setting was partially demonstrated, and the START:AV final risk judgments can be used to guide treatment planning and decision-making regarding furlough or discharge in this setting.
Current research on data in policy has primarily focused on street-level bureaucrats, neglecting the changes in the work of policy advisors. This research fills this gap by presenting an explorative theoretical understanding of the integration of data, local knowledge and professional expertise in the work of policy advisors. The theoretical perspective we develop builds upon Vickers’s (1995, The Art of Judgment: A Study of Policy Making, Centenary Edition, SAGE) judgments in policymaking. Empirically, we present a case study of a Dutch law enforcement network for preventing and reducing organized crime. Based on interviews, observations, and documents collected in a 13-month ethnographic fieldwork period, we study how policy advisors within this network make their judgments. In contrast with the idea of data as a rationalizing force, our study reveals that how data sources are selected and analyzed for judgments is very much shaped by the existing local and expert knowledge of policy advisors. The weight given to data is highly situational: we found that policy advisors welcome data in scoping the policy issue, but for judgments more closely connected to actual policy interventions, data are given limited value.
Chatbots are being used at an increasing rate, for instance, for simple Q&A conversations, flight reservations, online shopping and news aggregation. However, users expect to be served as effectively and reliably as they were with human-based systems and are unforgiving once the system fails to understand them, engage them or show them human empathy. This problem is more prominent when the technology is used in domains such as health care, where empathy and the ability to give emotional support are most essential during interaction with the person. Empathy, however, is a unique human skill, and conversational agents such as chatbots cannot yet express empathy in nuanced ways that account for its complex nature and quality. This project focuses on designing emotionally supportive conversational agents within the mental health domain. We take a user-centered co-creation approach to focus on the mental health problems of sexual assault victims. This group is chosen specifically because of the high rate of sexual assault incidents and their lifetime destructive effects on victims, and because, although early intervention and treatment are necessary to prevent future mental health problems, these incidents largely go unreported due to the stigma attached to sexual assault. On the other hand, research shows that people feel more comfortable talking to chatbots about intimate topics, since they feel no fear of judgment. We think an emotionally supportive and empathic chatbot specifically designed to encourage self-disclosure among sexual assault victims could help those who remain silent in fear of negative evaluation, and empower them to process their experience better and take the necessary steps towards treatment early on.