One aspect of a responsible application of Artificial Intelligence (AI) is ensuring that the operation and outputs of an AI system are understandable for non-technical users, who need to consider its recommendations in their decision-making. The importance of explainable AI (XAI) is widely acknowledged; however, its practical implementation is not straightforward. In particular, it is still unclear what non-technical users require from explanations, i.e., what makes an explanation meaningful. In this paper, we synthesize insights on meaningful explanations from a literature study and two use cases in the financial sector. We identified 30 components of meaningfulness in the XAI literature. In addition, we report three themes associated with explanation needs that were central to the users in our use cases but are not prominently described in the literature: actionability, coherent narratives and context. Our results highlight the importance of narrowing the gap between theoretical and applied responsible AI.
The aim of this research is to explore the potential of Mixed Reality (MR) technologies for operator support in order to progress towards Industry 4.0 (I4.0), particularly for SMEs. Through a series of interventions and interviews conducted with local SMEs, potential use cases and their drawbacks were identified. From this, insights were derived that serve as a starting point for further experiments with MR technology in the smart manufacturing laboratory at THUAS in Delft. The intervention consisted of a free-form workshop in which participants were given 'tinkering' time to explore MR in their own work environment. Awareness was assessed at three stages: during an introductory interview, after an instruction meeting, and after some 'tinkering'. The study took place between January 2022 and July 2022 with 10 local SMEs in the Netherlands. The results show that awareness and understanding increased for all SMEs. The use cases identified by the operators themselves concerned quality control, diagnostics, instruction, and the specification and improvement of operations. Foreseen drawbacks related to ergonomic concerns, resistance from operators, technical considerations, unavailability of MR devices, and a digital infrastructure insufficient to support MR to its full extent. The use case most promising to the participants was developed further into a physical prototype of an 'assisted assembly cell', with which the ergonomic aspects and the technical considerations mentioned could be analysed.
Technological innovation in the healthcare sector is increasing, but integration of information technology (IT) in the care process is difficult. Healthcare workers are important agents in this IT integration. The purpose of this study is to explore the factors that feed motivation to use IT. Self-determination theory (SDT) is applied to study how motivational factors impact effective IT use among frontline caregivers in residential care settings. As the team is very important to these caregivers, the team is our unit of analysis. In an embedded single case study design, interviews were conducted with all nine members of a team effectively using IT. All three basic psychological needs from SDT (autonomy, competence and relatedness) were found to impact effective IT use, though autonomy was primarily experienced at the team level. Conversely, the effective use of an IT collaboration tool influences relatedness.
Organizations feel an urgency to develop and implement applications based on foundation models: AI models that have been trained on large-scale general data and can be fine-tuned to domain-specific tasks. In this process, organizations face many questions, not only regarding model training and deployment, but also concerning added business value, implementation risks and governance. They express a need for guidance to answer these questions in a suitable and responsible way. We intend to offer such guidance through the question matrix presented in this paper. The question matrix is adapted from the model card to match the development of AI applications rather than AI models. First pilots with the question matrix revealed that it elicited discussions among developers and helped them explicate their choices and intentions during development.
It is now widely accepted that decisions made by AI systems must be explainable to their users. However, in practice, it often remains unclear how this explainability should be concretely implemented. This is especially important for non-technical users, such as claims assessors at insurance companies, who need to understand AI system decisions and be able to explain them to customers. Think, for example, of explaining a rejected insurance claim or loan application. Although the importance of explainable AI is broadly recognized, there is often a lack of practical tools to achieve it. That is why, in this handbook, we have combined insights from two use cases in the financial sector with findings from an extensive literature review. This has led to the identification of 30 key aspects of meaningful AI explanations. Based on these aspects, we developed a checklist to help AI developers make their systems more explainable. The checklist not only provides insight into how understandable an AI application currently is for end users, but also highlights areas for improvement.
DOCUMENT
Technology in general, and assistive technology in particular, is considered a promising opportunity to address the challenges of an aging population. Nevertheless, in health care, technology is not as widely used as might be expected. In this chapter, an overview is given of theories and models that help to understand this phenomenon. First, the design of (assistive) technologies will be addressed and the importance of human-centered design in the development of new assistive devices will be discussed. Theories and models about technology acceptance in general will also be addressed, with specific attention to technology acceptance among healthcare professionals and the implementation of technology within healthcare organizations. The chapter will be based on the state of the art in the scientific literature and will be illustrated with examples from our research in daily practice, considering the perspectives of the different stakeholders involved.
In Western Europe, cities that host International Organizations (IOs) face increasing competition. Over the last decade, many IOs have settled in Eastern European and Asian countries. Distributing IOs over several cities in Europe for reasons of political balance, as well as give-and-take among governments, plays a role in these decisions. However, public policy networks are increasingly operational in these negotiations. Apart from the political and administrative actors, others, such as private actors and external lobbyists, play a role as well. This often leads to increased complexity and ineffective decisions. This paper examines four cases in which political gameplay influenced the location decision-making of IOs in The Hague and Geneva. First, I will introduce the subject, the research method and the four cases. Second, I will discuss how public policy networks are increasingly complicating factors in the settlement processes of IOs. Third, a reconstruction of the settlement processes of these four IOs will illustrate this.