This guide was developed for designers and developers of AI systems, with the aim of ensuring that these systems are sufficiently explainable. Sufficient here means that the system meets the legal requirements of the AI Act and the GDPR and that users can work with it properly. In this guide we first explain the legal requirements that apply to the explainability of AI systems; these stem from the GDPR and the AI Act. We then explain how AI is used in the financial sector and work out one problem in detail. For this problem we show how the user interface can be adapted to make the AI explainable. These designs serve as prototypical examples that can be adapted to new problems. This guide is based on the explainability of AI systems in the financial sector, but the advice can also be applied in other sectors.
MULTIFILE
BACKGROUND: Non-use of and dissatisfaction with ankle foot orthoses (AFOs) occur frequently. The objective of this study is to gain insight into the conversation during the intake and examination phase, from the clients’ perspective, at two levels: 1) the attention for the activities and the context in which these activities take place, and 2) the quality of the conversation. METHODOLOGY: Semi-structured interviews were performed with 12 AFO users within a two-week period following intake and examination. In these interviews, and in the subsequent data analysis, extra attention was paid to the needs and wishes of the user, the desired activities and the environments in which these activities take place. RESULTS AND CONCLUSION: Activities and environments were seldom inquired about or discussed during the intake and examination phase. Also, activities were not placed in the context of their specific environment. As a result, the conversations lacked depth, and orthotists based their designs on a ‘reduced reality’: important and valuable contextual information that might benefit the prescription and design of assistive devices was missed. A model is presented for mapping user activities and user environments in a systematic way. The term ‘user practices’ is introduced to emphasise the concept of activities within a specific environment.
LINK
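The model mentioned in the abstract above is only named, not specified. As a rough illustration, a mapping of desired activities to the environments in which they take place could be recorded in a structure like the sketch below; the fields and example entries are illustrative assumptions, not taken from the study.

```python
# Illustrative sketch (not from the study): recording "user practices"
# (activities within a specific environment) during intake.

from dataclasses import dataclass
from typing import List

@dataclass
class UserPractice:
    activity: str       # what the client wants to do
    environment: str    # where that activity takes place
    notes: str = ""     # contextual details relevant to the orthosis design

def intake_summary(practices: List[UserPractice]) -> None:
    """Print a simple overview the orthotist can review with the client."""
    for p in practices:
        extra = f" ({p.notes})" if p.notes else ""
        print(f"- {p.activity} in/on {p.environment}{extra}")

if __name__ == "__main__":
    # Hypothetical example entries for one client.
    practices = [
        UserPractice("walking the dog", "unpaved forest paths", "uneven, sometimes muddy terrain"),
        UserPractice("cycling to work", "urban roads", "frequent stopping and starting"),
    ]
    intake_summary(practices)
```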
Technology can assist older adults to remain living in the community. Within the realm of information and communication technologies, smart homes are evolving toward the concept of ambient assisted living (AAL). AAL systems are more responsive to user needs and patterns of living, fostering physical activity for a healthier lifestyle, and capturing behaviours for prevention and future assistance. This study provides an overview of the design requirements and expectations towards AAL technologies that are formulated by the end-users, their relatives and health care workers, with a primary focus on health care in The Netherlands. The results concern the motivation for use of technology, requirements for the design, implementation, privacy and ethics. More research is required in terms of the actual needs of older users without dementia and their carers, and on AAL in general, as some of the work included concerns less sophisticated smart home technology.
DOCUMENT
The user’s experience with a recommender system is significantly shaped by the dynamics of user-algorithm interactions. These interactions are often evaluated using interaction qualities, such as controllability, trust, and autonomy, to gauge their impact. As part of our effort to systematically categorize these evaluations, we explored the suitability of the interaction qualities framework proposed by Lenz, Diefenbach and Hassenzahl. During this examination, we uncovered four challenges within the framework itself and an additional external challenge. In studies examining the interaction between user control options and interaction qualities, interdependencies between concepts, inconsistent terminology, and the entity perspective (is it a user’s trust or a system’s trustworthiness?) often hinder a systematic inventory of the findings. Additionally, our discussion underscored the crucial role of the decision context in evaluating the relation between algorithmic affordances and interaction qualities. We propose dimensions of decision contexts (such as ‘reversibility of the decision’ or ‘time pressure’) that could aid in establishing a systematic three-way relationship between context attributes, attributes of user control mechanisms, and experiential goals; as such, they warrant further research. In sum, while the interaction qualities framework serves as a foundational structure for organizing research on evaluating the impact of algorithmic affordances, challenges related to interdependencies and context-specific influences remain. These challenges necessitate further investigation and subsequent refinement and expansion of the framework.
LINK
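As a rough illustration of the three-way relationship proposed in the abstract above, individual studies could be annotated with context attributes, the control mechanism under study, and the interaction qualities evaluated. The sketch below is not part of the paper; beyond the two context dimensions named in the abstract, the record fields and the example entry are illustrative assumptions.

```python
# Illustrative sketch (not from the paper): annotating studies with decision-context
# dimensions, the algorithmic affordance under study, and the evaluated qualities.

from dataclasses import dataclass
from typing import List

@dataclass
class DecisionContext:
    reversibility: str   # e.g. "easily reversible" vs "hard to reverse"
    time_pressure: str   # e.g. "low" vs "high"

@dataclass
class StudyRecord:
    control_mechanism: str            # the user control option / algorithmic affordance studied
    interaction_qualities: List[str]  # e.g. controllability, trust, autonomy
    entity_perspective: str           # "user" (e.g. trust) or "system" (e.g. trustworthiness)
    context: DecisionContext

# Hypothetical example entry in such an inventory.
inventory: List[StudyRecord] = [
    StudyRecord(
        control_mechanism="re-rank recommendations by user-weighted criteria",
        interaction_qualities=["controllability", "trust"],
        entity_perspective="user",
        context=DecisionContext(reversibility="easily reversible", time_pressure="low"),
    ),
]

# Records like these would allow grouping findings by context attributes,
# control mechanisms, and experiential goals.
for record in inventory:
    print(record.control_mechanism, "->", ", ".join(record.interaction_qualities))
```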
This poster sketches the outlines of a theoretical research framework to assess whether and on what grounds certain behavioral effects may be attributed to particular game mechanics and gameplay aspects. It is founded on the Elaboration Likelihood Model of Persuasion (ELM), which is well suited to guide the evaluation structure for interventions that aim at either short-term or long-term attitude and behavior change.
DOCUMENT
Purpose: Building services technologies such as home automation systems and remote monitoring are increasingly used to support people in their own homes. In order for these technologies to be fully appreciated by the end-users (mainly older care recipients, informal carers and care professionals), user needs should be understood [1,2]. In other words, supply and demand should match. Steele et al. [3] state that there is a shortage of studies exploring the perceptions of older users towards technology and the acceptance or rejection thereof. This paper presents an overview of user needs in relation to ambient assisted living (AAL) projects, which aim to support ageing-in-place in The Netherlands. Method: A literature survey was made of Dutch AAL projects, focusing on user needs. A total of 7 projects concerned with older persons, with and without dementia, were included in the overview. Results & Discussion: By and large, technology is considered to be a great support in enabling people to age-in-place. Technology is, therefore, accepted and even embraced by many of the end-users and their relatives. Technology used for safety, security, and emergency response is most valued. Involvement of end-users improves the successful implementation of ambient technology. This is also true for family involvement in the case of persons with dementia. Privacy is mainly a concern for care professionals. This group is also key to successful implementation, as they need to be able to work with the technology and provide information to the end-users. Ambient technologies should be designed in an unobtrusive way, in keeping with the interior design, and be usable by persons with sensory or physical impairments. In general, user needs, particularly the needs of informal carers and care professionals, are an understudied topic. These latter two groups play an important role in implementation and acceptance among care recipients and therefore deserve more attention from the research community.
LINK
As interactive systems become increasingly complex and entwined with the environment, technology is becoming more and more invisible. This means that much of the technology that people come across every day goes unnoticed and that the (potential) workings of ambient systems are not always clearly communicated to the user. The projects discussed in this paper are aimed at increasing public understanding of the existence, workings and potential of screens and ambient technology by making that potential visible. To address issues and implications of visibility and system transparency, this paper presents work in progress as example cases for engaging people in ambient monitoring and public screening. This includes exploring desired scenarios for ambient monitoring with users as diverse as elderly people and tourists, and an interactive tool for mapping public screens.
DOCUMENT
Purpose: The aims of this study were to investigate how a variety of research methods is commonly employed to study technology and practitioner cognition. User-interface issues with infusion pumps were selected as a case because of their relevance to patient safety. Methods: Starting from a Cognitive Systems Engineering perspective, we developed an Impact Flow Diagram showing the relationship between computer technology, cognition, practitioner behavior, and system failure in the area of medical infusion devices. We subsequently conducted a systematic literature review on user-interface issues with infusion pumps, categorized the studies in terms of the methods employed, and noted the usability problems found with particular methods. Next, we assigned usability problems and related methods to the levels in the Impact Flow Diagram. Results: Most study methods used to find user-interface issues with infusion pumps focused on observable behavior rather than on how artifacts shape cognition and collaboration. A concerted and theory-driven application of these methods when testing infusion pumps is lacking in the literature. Detailed analysis of one case study provided an illustration of how to apply the Impact Flow Diagram, as well as how the scope of analysis may be broadened to include organizational and regulatory factors. Conclusion: Research methods to uncover use problems with technology may be used in many ways, with many different foci. We advocate the adoption of an Impact Flow Diagram perspective rather than merely focusing on usability issues in isolation. Truly advancing patient safety requires the systematic adoption of a systems perspective that views people and technology as an ensemble, also in the design of medical device technology.
DOCUMENT
Vehicle2Grid is a new charging strategy that allows for charging and discharging of Plug-In Hybrid Electric Vehicles (PHEV) and Full Electric Vehicles (FEV). The discharged energy can be supplied back to the (local) energy grid, enabling grid alleviation, but can also be supplied back to the household in the case of a Vehicle2Home connection. Vehicle2Grid is an innovative and complex system that requires adequate input from users if the local energy grid is to fully benefit from the discharged energy. Users have to be willing to let the State of Charge of their EV be adjusted in order for the Vehicle2Grid system to actually discharge energy from the EV. However, limiting the potential range of an EV can act as a barrier to the use of a Vehicle2Grid system, as discharging might cause uncertainty and possible range anxiety. Charging and discharging an EV through Vehicle2Grid is therefore expected to change users’ routines and interactions with the charging system. Yet few Vehicle2Grid studies have focused on the requirements of a Vehicle2Grid system from the perspective of the user. This paper discusses several incentives and design guidelines that focus on the interaction users have with a Vehicle2Grid system, in order to optimize user engagement with the system and integrate user preferences into the complex charging strategy. Results were obtained through a brief literature study, a focus group, and two Vehicle2Grid field pilots. At the end of the paper, recommendations for further research are given.
DOCUMENT
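To make concrete how a user preference might be integrated into the charging strategy discussed above, the sketch below shows one possible rule: the system only discharges energy above a minimum state of charge set by the driver. This is an illustrative assumption, not the design used in the field pilots; all names and figures are hypothetical.

```python
# Illustrative sketch (not from the paper): limiting Vehicle2Grid discharge
# by a user-set reserve to reduce range anxiety.

def available_for_discharge(state_of_charge_kwh: float,
                            battery_capacity_kwh: float,
                            user_reserve_fraction: float) -> float:
    """Energy (kWh) that may be discharged without dropping below the
    minimum state of charge the user is comfortable with."""
    reserve_kwh = user_reserve_fraction * battery_capacity_kwh
    return max(0.0, state_of_charge_kwh - reserve_kwh)

def plan_discharge(grid_request_kwh: float,
                   state_of_charge_kwh: float,
                   battery_capacity_kwh: float,
                   user_reserve_fraction: float) -> float:
    """Discharge the smaller of what the grid asks for and what the user allows."""
    return min(grid_request_kwh,
               available_for_discharge(state_of_charge_kwh,
                                       battery_capacity_kwh,
                                       user_reserve_fraction))

if __name__ == "__main__":
    # A driver who wants to keep at least 60% charge for unplanned trips.
    kwh = plan_discharge(grid_request_kwh=8.0,
                         state_of_charge_kwh=40.0,
                         battery_capacity_kwh=60.0,
                         user_reserve_fraction=0.6)
    print(f"Discharge {kwh:.1f} kWh to the grid")  # 4.0 kWh: limited by the user's reserve
```

A rule of this kind keeps the final say over range with the driver, which is one way to address the uncertainty and range anxiety mentioned above.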
This guide was developed for designers and developers of AI systems, with the goal of ensuring that these systems are sufficiently explainable. Sufficient here means that the system meets the legal requirements of the AI Act and the GDPR and that users can use it properly. Explainability of decisions is an important requirement in many systems and even an important principle for AI systems [HLEG19]. In many AI systems, explainability is not self-evident. AI researchers expect that the challenge of making AI explainable will only grow. This is partly driven by the applications: AI will be used more often and for larger, more sensitive decisions. At the same time, organizations are building ever better models, for example by using a wider variety of inputs. With more complex AI models, it is often less clear how a decision was made. Organizations that deploy AI must take users' need for explanations into account. Systems that use AI should be designed to provide the user with appropriate explanations. In this guide, we first explain the legal requirements for explainability of AI systems. These come from the GDPR and the AI Act. Next, we explain how AI is used in the financial sector and elaborate on one problem in detail. For this problem, we then show how the user interface can be modified to make the AI explainable. These designs serve as prototypical examples that can be adapted to new problems. This guide is based on the explainability of AI systems in the financial sector, but the advice can also be applied in other sectors.
DOCUMENT
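As a rough illustration of how a decision could be fed to an explainable user interface, the sketch below returns per-feature contributions alongside a credit decision for a simple linear scoring model. This example is not taken from the guide; the model, feature names, weights, and threshold are illustrative assumptions.

```python
# Illustrative sketch (not from the guide): per-feature contributions for a
# single decision from a linear scoring model, so a UI can show the applicant
# which inputs pushed the outcome up or down.

from dataclasses import dataclass

@dataclass
class Contribution:
    feature: str         # human-readable feature label for the UI
    value: float         # the applicant's value for this feature
    contribution: float  # signed effect on the decision score

def explain_linear_decision(weights, baseline, applicant, threshold=0.0):
    """Return the decision plus per-feature contributions for a linear model.

    For a linear score, each feature contributes weight * (value - baseline value),
    and the contributions sum to the score difference from the baseline applicant.
    """
    contributions = [
        Contribution(f, applicant[f], w * (applicant[f] - baseline[f]))
        for f, w in weights.items()
    ]
    score = sum(c.contribution for c in contributions)
    decision = "approved" if score >= threshold else "rejected"
    # Sort so the UI can show the most influential factors first.
    contributions.sort(key=lambda c: abs(c.contribution), reverse=True)
    return decision, contributions

if __name__ == "__main__":
    # Hypothetical model and applicant data.
    weights = {"income": 0.4, "existing_debt": -0.6, "years_employed": 0.2}
    baseline = {"income": 3.0, "existing_debt": 1.0, "years_employed": 5.0}
    applicant = {"income": 2.5, "existing_debt": 2.4, "years_employed": 3.0}
    decision, contributions = explain_linear_decision(weights, baseline, applicant)
    print(f"Decision: {decision}")
    for c in contributions:
        direction = "raised" if c.contribution > 0 else "lowered"
        print(f"- {c.feature} = {c.value}: {direction} the score by {abs(c.contribution):.2f}")
```

A user interface could render the sorted contributions as plain-language reasons next to the decision; for more complex models, the same pattern could be filled in with a model-agnostic attribution method instead of the hand-computed linear contributions used here.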