Project objectives
Radicalisation research raises ethical and legal questions and issues. These issues need to be addressed in a way that helps the project progress in an ethically and legally acceptable manner.

Description of Work
The legal analysis in SAFIRE addressed questions such as which behaviour associated with radicalisation constitutes criminal behaviour. The ethical issues were addressed throughout the project in close cooperation between the ethicists and the researchers, using a method called ethical parallel research.

Results
A legal analysis of criminal law and radicalisation was produced. During the project, lively discussions about ethical issues were held within the research team. An ethical justification for interventions in radicalisation processes has been written. With regard to research ethics: an indirect informed consent procedure for interviews with (former) radicals has been designed, and practical guidelines were developed to prevent obtaining information that could lead to the indirect identification of respondents.
This document presents the findings of a study into methods that can help counterterrorism professionals make decisions about ethical problems. The study was commissioned by the Research and Documentation Centre (Wetenschappelijk Onderzoek- en Documentatiecentrum, WODC) of the Dutch Ministry of Security and Justice (Ministerie van Veiligheid en Justitie), on behalf of the National Coordinator for Counterterrorism and Security (Nationaal Coördinator Terrorismebestrijding en Veiligheid, NCTV). The research team at RAND Europe was complemented by applied ethics expert Anke van Gorp from the Research Centre for Social Innovation (Kenniscentrum Sociale Innovatie) at Hogeschool Utrecht. The study provides an inventory of methods to support ethical decision-making in counterterrorism, drawing on the experience of other public sectors – healthcare, social work, policing and intelligence – and multiple countries, primarily the Netherlands and the United Kingdom.
The guidance offered here is intended to assist social workers in thinking through the specific ethical challenges that arise whilst practising during a pandemic or other type of crisis. In crisis conditions, people who need social work services, and social workers themselves, face increased and unusual risks. These challenging conditions are further compounded by scarce or reallocated governmental and social resources. While the ethical principles underpinning social work remain unchanged by crises, unique and evolving circumstances may demand that they be prioritised differently. A decision or action that might be regarded as ethically wrong in ‘normal’ times may be judged to be right in a time of crisis. Examples include: prioritising individual and public health considerations by restricting people’s freedom of movement; not consulting people about treatment and services; or avoiding face-to-face meetings.
In this project, we explore how healthcare providers and the creative industry can collaborate to develop effective digital mental health interventions, particularly for survivors of sexual assault. Sexual assault victims face significant barriers to seeking professional help, including shame, self-blame, and fear of judgment. With over 100,000 cases reported annually in the Netherlands, the need for accessible, stigma-free support is urgent. Digital interventions, such as chatbots, offer a promising solution by providing a safe, confidential, and cost-effective space for victims to share their experiences before seeking professional care. However, existing commercial AI chatbots remain unsuitable for complex mental health support. While widely used for general health inquiries and basic therapy, they lack the human qualities essential for empathetic conversations. Additionally, training AI for this sensitive context is challenging due to limited caregiver-patient conversation data. A key concern raised by professionals worldwide is the risk of AI-driven chatbots being misused as therapy substitutes. Without proper safeguards, they may offer inappropriate responses, potentially harming users. This highlights the urgent need for strict design guidelines, robust safety measures, and comprehensive oversight in AI-based mental health solutions. To address these challenges, this project brings together experts from healthcare and the design field, especially conversation designers, to explore the power of design in developing a trustworthy, user-centered chatbot experience tailored to survivors' needs. Through an iterative process of research, co-creation, prototyping, and evaluation, we aim to integrate safe and effective digital support into mental healthcare. Our overarching goal is to bridge the gap between digital healthcare and the creative sector, fostering long-term collaboration.
By combining clinical expertise with design innovation, we seek to develop personalized tools that ethically and effectively support individuals with mental health problems.
Smart city technologies, including artificial intelligence and computer vision, promise to bring a higher quality of life and more efficiently managed cities. However, developers, designers, and professionals working in urban management have started to realize that implementing these technologies poses numerous ethical challenges. Policy papers now call for human and public values in tech development, ethics guidelines for trustworthy AI, and cities for digital rights. In a democratic society, these technologies should be understandable for citizens (transparency) and open for scrutiny and critique (accountability). When implementing such public values in smart city technologies, professionals face numerous knowledge gaps. Public administrators find it difficult to translate abstract values like transparency into concrete specifications for designing new services. In the private sector, developers and designers still lack a ‘design vocabulary’ and exemplary projects that can inspire them to respond to transparency and accountability demands. Finally, both the public and private sectors see a need to include the public in the development of smart city technologies but haven’t found the right methods. This proposal aims to help these professionals develop an integrated, value-based and multi-stakeholder design approach for the ethical implementation of smart city technologies. It does so by setting up a research-through-design trajectory to develop a prototype for an ethical ‘scan car’, as a concrete and urgent example of the deployment of computer vision and algorithmic governance in public space. Three (practical) knowledge gaps will be addressed. With civil servants at municipalities, we will create methods enabling them to translate public values such as transparency into concrete specifications and evaluation criteria. With designers, we will explore methods and patterns to answer these value-based requirements.
Finally, we will further develop methods to engage civil society in this process.