State Secretary Van Huffelen (Digital Affairs) wants to require civil servants to carry out a 'human rights impact assessment' for new as well as existing algorithms. A good development, according to Quirine Eijkman and colleagues at the College voor de Rechten van de Mens (the Netherlands Institute for Human Rights) — provided it comes with a binding discrimination test.
The realization of human rights standards depends in part on the commitment of local actors. It can be argued that local public service professionals, such as social workers, can also be regarded as key players. The possible role of social workers becomes imperative when these professionals are working in a policy context that is not congruent with human rights. If existing laws or policies cause or maintain disrespect for human rights, social workers are in a position to observe that this has an adverse impact on clients. When social workers are regarded as human rights actors, the question arises as to how they can or should respond to law and policy that impedes them in carrying out their work with respect for human rights. This article adds to existing theories on social workers as human rights actors by examining the practices of social professionals working in such a challenging policy context. The research took place among professionals in social district teams in the city of Utrecht, the Netherlands. Following a series of decentralizations and austerity measures, the social care landscape in the Netherlands has changed drastically over the last few years. As a result, social workers may find themselves, on the one hand, trying to realize the best possible care for their clients while, on the other hand, dealing with new laws and policy expectations focused on self-reliance and diminished access to specialist care. The article explores how social professionals' responses to barriers in access to care affect human rights requirements. In doing so, this socio-legal study provides insight into the ways in which everyday social work relates to the realization of human rights at the local level.
This white paper is the result of a research project by Hogeschool Utrecht, Floryn, Researchable, and De Volksbank, carried out between November 2021 and November 2022. The research project was a KIEM project funded by the Taskforce for Applied Research SIA. Its goal was to identify the aspects that play a role in implementing the explainability of artificial intelligence (AI) systems in the Dutch financial sector. In this white paper, we present a checklist of the aspects derived from this research. The checklist contains checkpoints and related questions that need consideration when making explainability-related choices in different stages of the AI lifecycle. The goal of the checklist is to give designers and developers of AI systems a tool to ensure that the AI system provides proper and meaningful explanations to each stakeholder.