Purpose: The aim of this study was to investigate how a variety of research methods are commonly employed to study technology and practitioner cognition. User-interface issues with infusion pumps were selected as a case because of their relevance to patient safety. Methods: Starting from a Cognitive Systems Engineering perspective, we developed an Impact Flow Diagram showing the relationships among computer technology, cognition, practitioner behavior, and system failure in the area of medical infusion devices. We subsequently conducted a systematic literature review on user-interface issues with infusion pumps, categorized the studies in terms of the methods employed, and noted the usability problems found with particular methods. Next, we assigned usability problems and related methods to the levels in the Impact Flow Diagram. Results: Most study methods used to find user-interface issues with infusion pumps focused on observable behavior rather than on how artifacts shape cognition and collaboration. A concerted and theory-driven application of these methods when testing infusion pumps is lacking in the literature. Detailed analysis of one case study illustrated how to apply the Impact Flow Diagram and how the scope of analysis may be broadened to include organizational and regulatory factors. Conclusion: Research methods to uncover use problems with technology may be used in many ways, with many different foci. We advocate adopting an Impact Flow Diagram perspective rather than merely focusing on usability issues in isolation. Truly advancing patient safety requires the systematic adoption of a systems perspective that views people and technology as an ensemble, also in the design of medical device technology.
Introduction: Sensor-feedback systems can be used to support people after stroke during independent practice of gait. The main aim of the study was to describe the user-centred approach to (re)design the user interface of the sensor-feedback system “Stappy” for people after stroke, and to share the deliverables and key observations from this process. Methods: The user-centred approach was structured around four phases (the discovery, definition, development and delivery phases), which were fundamental to the design process. Fifteen participants with cognitive and/or physical limitations participated (10 women, two-thirds older than 65). Prototypes were evaluated in multiple test rounds, each consisting of 2–7 individual test sessions. Results: Seven deliverables were created: a list of design requirements, a persona, a user flow, a low-, medium- and high-fidelity prototype, and the character “Stappy”. The first six deliverables were necessary tools for designing the user interface, whereas the character was a solution resulting from this design process. Key observations related to “readability and contrast of visual information”, “understanding and remembering information” and “physical limitations” were confirmed by the design process, and “empathy” was additionally derived from it. Conclusions: The study offers a structured methodology resulting in deliverables and key observations, which can be used to (re)design meaningful user interfaces for people after stroke. Additionally, the study provides a technique that may promote “empathy” through the creation of the character Stappy. The description may provide guidance for health care professionals, researchers or designers in future user-interface design projects in which existing products are redesigned for people after stroke.
This guide was developed for designers and developers of AI systems, with the goal of ensuring that these systems are sufficiently explainable. Sufficient here means that the system meets the legal requirements of the AI Act and the GDPR and that users can use it properly. Explainability of decisions is an important requirement in many systems and a key principle for AI systems [HLEG19]. In many AI systems, explainability is not self-evident. AI researchers expect that the challenge of making AI explainable will only increase. On the one hand, this stems from the applications: AI will be used more and more often, for larger and more sensitive decisions. On the other hand, organizations are building better and better models, for example by using more diverse inputs. With more complex AI models, it is often less clear how a decision was made. Organizations that deploy AI must take into account users' need for explanations. Systems that use AI should be designed to provide the user with appropriate explanations. In this guide, we first explain the legal requirements for explainability of AI systems, which come from the GDPR and the AI Act. Next, we explain how AI is used in the financial sector and elaborate on one problem in detail. For this problem, we then show how the user interface can be modified to make the AI explainable. These designs serve as prototypical examples that can be adapted to new problems. This guidance is based on the explainability of AI systems in the financial sector, but the advice can also be applied in other sectors.
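As a concrete illustration of the kind of user-facing explanation such a design might surface, the sketch below is a minimal, hypothetical example and is not taken from the guide: it assumes a simple linear credit-approval model and invented feature names (income, debt_ratio, years_at_employer), and reports each feature's contribution to a single decision in plain language. A real system subject to the AI Act and GDPR would need a vetted explanation method and wording chosen for the actual model and problem.

```python
# Minimal, hypothetical sketch: a user-facing explanation for one decision,
# assuming a linear (logistic regression) model whose per-feature
# contributions are easy to report. Feature names and data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income", "debt_ratio", "years_at_employer"]

# Toy data standing in for a financial-sector decision problem.
X = np.array([[60, 0.2, 5], [20, 0.8, 1], [45, 0.5, 3], [80, 0.1, 10]], dtype=float)
y = np.array([1, 0, 1, 1])  # 1 = application approved

model = LogisticRegression().fit(X, y)

def explain(applicant: np.ndarray) -> list[str]:
    """One plain-language line per feature, ordered by impact on the decision."""
    contributions = model.coef_[0] * applicant  # contribution to the log-odds
    order = np.argsort(-np.abs(contributions))
    return [
        f"{feature_names[i]} {'raised' if contributions[i] > 0 else 'lowered'} "
        f"the approval score (weight {contributions[i]:+.2f})"
        for i in order
    ]

applicant = np.array([30, 0.6, 2], dtype=float)
decision = model.predict([applicant])[0]
print("Decision:", "approved" if decision == 1 else "rejected")
for line in explain(applicant):
    print("-", line)
```

The point of the sketch is the shape of the output (a ranked, human-readable reason per input) rather than the particular model; the guide's own designs determine which explanation technique and wording are appropriate for a given system.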