Social network analysis can be a powerful tool for better understanding the social context of terrorist activities, and it may also offer potential leads for agencies to intervene. Our access to Dutch police information allows us to analyse the relational features of two networks that include actors who planned acts of terrorism and were active in the dissemination of a Salafi-Jihadi interpretation of Islam (n = 57; n = 26). Based on a mixed-method approach that combines qualitative analysis with more formal statistical analysis (exponential random graph models), we analyse the structural characteristics of these networks, individual positions, and the extent to which radical leaders, pre-existing family and friendship ties, and radicalizing settings influence tie formation among actors. We find that both networks resemble a core–periphery structure, with cores formed by a densely interconnected group of actors who frequently meet in radicalizing settings. Based on our findings, we discuss the potential effects of preventive and repressive measures developed within the Dutch counterterrorism framework.
Artificial Intelligence (AI) offers organizations unprecedented opportunities. However, one of the risks of using AI is that its outcomes and inner workings are not intelligible. In industries where trust is critical, such as healthcare and finance, explainable AI (XAI) is a necessity. However, the implementation of XAI is not straightforward, as it requires addressing both technical and social aspects. Previous studies on XAI primarily focused on either the technical or the social aspects and lacked a practical perspective. This study aims to empirically examine the XAI-related aspects faced by developers, users, and managers of AI systems during the development process of an AI system. To this end, a multiple case study was conducted in two Dutch financial services companies using four use cases. Our findings reveal a wide range of aspects that must be considered during XAI implementation, which we grouped and integrated into a conceptual model. This model helps practitioners make informed decisions when developing XAI. We argue that the diversity of aspects to consider necessitates an XAI "by design" approach, especially for high-risk use cases in high-stakes industries such as finance, public services, and healthcare. As such, the conceptual model offers a taxonomy for the method engineering of XAI-related methods, techniques, and tools.