There is mounting evidence that efforts to mitigate the adverse effects of human activity on climate and biodiversity have so far been unsuccessful. Explanations for this failure point to a number of factors discussed in this article. While acknowledging cognitive dissonance as a significant contributor to continuing unsustainable practices, this article explores the hegemonic rationality of industrial expansion and economic growth and the resulting politics of denial. These politics promote the economic rationale for exploitation of the environment, with the pursuit of material wealth seen as the most rational goal. Framed this way, this rationality is presented by political and corporate decision-makers as common sense, and continued environmentally destructive behavior is justified under the guise of consumer choice, hampering meaningful action for sustainable change. This article highlights forms of alternative rationality, namely a non-utilitarian and non-hierarchical worldview of environmental and human flourishing, that can advance sustainability.
This white paper is the result of a research project by Hogeschool Utrecht, Floryn, Researchable, and De Volksbank in the period November 2021 to November 2022. The research was a KIEM project granted by the Taskforce for Applied Research SIA. Its goal was to identify the aspects that play a role in implementing the explainability of artificial intelligence (AI) systems in the Dutch financial sector. In this white paper, we present a checklist of the aspects derived from this research. The checklist contains checkpoints and related questions that need to be considered when making explainability-related choices in the different stages of the AI lifecycle. The goal of the checklist is to give designers and developers of AI systems a tool to ensure that the AI system gives proper and meaningful explanations to each stakeholder.
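As a rough illustration of how such a lifecycle checklist could be represented in code, the sketch below groups checkpoints with guiding questions per stage. This is a minimal sketch only: the stage names, checkpoint names, and questions are hypothetical placeholders, not the checklist from the white paper.

```python
# Minimal sketch of a lifecycle checklist structure.
# All stage/checkpoint names and questions below are hypothetical
# placeholders, NOT the actual checklist from the white paper.
from dataclasses import dataclass, field

@dataclass
class Checkpoint:
    name: str
    questions: list[str] = field(default_factory=list)

# Checkpoints grouped by (hypothetical) AI-lifecycle stage.
checklist = {
    "design": [
        Checkpoint(
            "Identify stakeholders",
            ["Who needs an explanation of the system's output?",
             "What decision does each stakeholder base on it?"],
        ),
    ],
    "development": [
        Checkpoint(
            "Select explanation method",
            ["Does the chosen XAI technique match the model type?",
             "Is the explanation understandable for non-technical users?"],
        ),
    ],
    "deployment": [
        Checkpoint(
            "Evaluate explanations",
            ["Do stakeholders find the explanations meaningful?"],
        ),
    ],
}

# Walk the checklist stage by stage, printing each checkpoint's questions.
for stage, checkpoints in checklist.items():
    print(f"Stage: {stage}")
    for cp in checkpoints:
        print(f"  [{cp.name}]")
        for q in cp.questions:
            print(f"    - {q}")
```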
One aspect of the responsible application of Artificial Intelligence (AI) is ensuring that the operation and outputs of an AI system are understandable for non-technical users, who need to consider its recommendations in their decision making. The importance of explainable AI (XAI) is widely acknowledged; however, its practical implementation is not straightforward. In particular, it is still unclear what non-technical users require from explanations, i.e., what makes an explanation meaningful. In this paper, we synthesize insights on meaningful explanations from a literature study and two use cases in the financial sector. We identified 30 components of meaningfulness in the XAI literature. In addition, we report three themes associated with explanation needs that were central to the users in our use cases but are not prominently described in the literature: actionability, coherent narratives, and context. Our results highlight the importance of narrowing the gap between theoretical and applied responsible AI.
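To make the three user-centred themes concrete, the sketch below represents them as a simple review aid that flags which themes an explanation still lacks. The class and field names are hypothetical illustrations and are not part of the paper.

```python
# Hypothetical sketch: the three themes from the use cases
# (actionability, coherent narrative, context) as a review aid.
# Names and structure are illustrative assumptions, not the paper's method.
from dataclasses import dataclass

@dataclass
class ExplanationReview:
    actionable: bool          # can the user act on the explanation?
    coherent_narrative: bool  # does it tell a consistent story?
    in_context: bool          # is it tied to the user's situation?

    def missing_themes(self) -> list[str]:
        """Return the themes this explanation does not yet cover."""
        gaps = []
        if not self.actionable:
            gaps.append("actionability")
        if not self.coherent_narrative:
            gaps.append("coherent narrative")
        if not self.in_context:
            gaps.append("context")
        return gaps

# Example: an explanation that is actionable and contextual
# but lacks a coherent narrative.
review = ExplanationReview(actionable=True, coherent_narrative=False, in_context=True)
print(review.missing_themes())  # ['coherent narrative']
```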