Artificial Intelligence systems are increasingly being introduced into first response; however, this introduction needs to be done responsibly. While generic claims about what this entails already exist, more detail is required to understand the exact nature of responsible application of AI within the first response domain. The context in which AI systems are applied largely determines their ethical, legal, and societal impact, and how to deal with this impact responsibly. For that reason, we empirically investigate relevant human values that are affected by the introduction of a specific AI-based Decision Aid (AIDA), a decision support system under development for Fire Services in the Netherlands. We held 10 expert group sessions and discussed the impact of AIDA on different stakeholders. This paper presents the design and implementation of the study and, as we are still in the process of analyzing the sessions in detail, summarizes preliminary insights and steps forward.
The paper explores whether, and under what conditions, vaccination against SARS-CoV-2 may become a mandatory requirement for employees. It includes a discussion of EU action on SARS-CoV-2 vaccination and its relevance for national-level policy, with emphasis on the legal basis and instruments used by the Union to persuade national authorities into action to increase vaccination uptake. The analysis then moves to the national level by focusing on the case of Hungary. Following an overview of the legal and regulatory framework for SARS-CoV-2 vaccine deployment, the analysis zooms into the sphere of employment and explores whether and how SARS-CoV-2 vaccination may be turned into a mandatory workplace safety requirement. The paper highlights the decision of the Hungarian government to introduce compulsory vaccination for employees in the healthcare sector, and concludes with a discussion of the relevant rules and their potential broader implications.
The HCR-20V3 is a violence risk assessment tool that is widely used in forensic clinical practice for risk management planning. The predictive value of the tool when used in court for legal decision-making has not yet been studied intensively, and questions about legal admissibility may arise. This article aims to provide legal and mental health practitioners with an overview of the strengths and weaknesses of the HCR-20V3 when applied in legal settings. The HCR-20V3 is described and discussed with respect to its psychometric properties for different groups and settings. Issues involving legal admissibility and potential biases when conducting violence risk assessments with the HCR-20V3 are outlined. To explore legal admissibility challenges with respect to the HCR-20V3, we searched case law databases since 2013 from Australia, Canada, Ireland, the Netherlands, New Zealand, the UK, and the USA. In total, we found 546 cases referring to the HCR-20/HCR-20V3. In these cases, the tool was rarely challenged (4.03%), and when challenged, it never resulted in a court decision that the risk assessment was inadmissible. Finally, we provide recommendations for legal practitioners for the cross-examination of risk assessments, and recommendations for mental health professionals who conduct risk assessments and report to the court. We conclude with suggestions for future research with the HCR-20V3 to strengthen the evidence base for use of the instrument in legal contexts.
The ELSA AI lab Northern Netherlands (ELSA-NN) is committed to the promotion of healthy living, working and ageing. By investigating cultural, ethical, legal, socio-political, and psychological aspects of the use of AI in different decision-making contexts and integrating this knowledge into an online ELSA tool, ELSA-NN aims to contribute to knowledge about trustworthy human-centric AI and the development and implementation of health technology innovations, including AI, in the Northern region.

The research in ELSA-NN will focus on developing and mapping ELSA knowledge around three general concepts of importance for the development, monitoring and implementation of trustworthy and human-centric AI: availability, use, and performance. These concepts will be explored in two lines of research: 1) use case research investigating the use of different AI applications with different types of data in different decision-making contexts at different time periods during the life course, and 2) an exploration among stakeholders in the Northern region of needs, knowledge, (digital) health literacy, attitudes and values concerning the use of AI in decision-making for healthy living, working and ageing. Specific focus will be on investigating low socioeconomic status (SES) perspectives, since health disparities between high and low SES groups are growing worldwide, including in the Northern region, and existing health inequalities may increase with the introduction and use of innovative health technologies such as AI.

ELSA-NN will be integrated within the AI hub Northern-Netherlands, the Health Technology Research & Innovation Cluster (HTRIC) and the Data Science Center in Health (DASH). They offer a solid base and infrastructure for the ELSA-NN consortium, which will be extended with additional partners, especially patient/citizen, private, governmental and research representatives, to form a quadruple-helix consortium.
ELSA-NN will be set up as a learning health system in which much attention will be paid to dialogue, communication and education.