An increasing number of patients receive ambulance care without being conveyed to a definitive care provider. EMS clinicians have described this process as complex, challenging, and lacking guideline support. The use of quality and outcome measures among non-conveyed patients remains understudied. Aim: To identify current quality and outcome measures for the general population of non-conveyed patients in order to describe major trends and knowledge gaps.
Background: A variety of options and techniques for promoting implicit and explicit motor learning have been described in the literature. The aim of the current paper was to provide clearer guidance for practitioners on how to apply motor learning in practice by exploring experts’ opinions and experiences, using the distinction between implicit and explicit motor learning as a conceptual departure point. Methods: A survey was designed to collect and aggregate informed opinions and experiences from 40 international respondents who had demonstrable expertise related to motor learning in practice and/or research. The survey was administered through an online survey tool and addressed potential options and learning strategies for applying implicit and explicit motor learning. Responses were analysed in terms of consensus (≥ 70%) and trends (≥ 50%). A summary figure was developed to illustrate a taxonomy of the different learning strategies and options indicated by the experts in the survey.
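The consensus and trend thresholds described above can be illustrated with a minimal sketch. The function and the endorsement counts below are hypothetical, purely for illustration; they are not the study's actual data or analysis code. The sketch assumes each option is endorsed by some number of the 40 respondents and is labelled "consensus" at ≥ 70% support and "trend" at ≥ 50%.

```python
def classify_support(endorsements, n_respondents):
    """Label each option by expert support level, per the abstract's thresholds:
    consensus at >= 70% endorsement, trend at >= 50%, otherwise no agreement."""
    labels = {}
    for option, count in endorsements.items():
        share = count / n_respondents
        if share >= 0.70:
            labels[option] = "consensus"
        elif share >= 0.50:
            labels[option] = "trend"
        else:
            labels[option] = "no agreement"
    return labels

# Hypothetical endorsement counts from 40 respondents (illustrative only)
votes = {"analogy learning": 32, "errorless learning": 22, "dual-task practice": 14}
print(classify_support(votes, 40))
# {'analogy learning': 'consensus', 'errorless learning': 'trend', 'dual-task practice': 'no agreement'}
```

Such a rule-based classification makes the aggregation reproducible, though the study itself reports only the thresholds, not its tooling.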
Explainable Artificial Intelligence (XAI) aims to provide insights into the inner workings and the outputs of AI systems. Recently, there has been growing recognition that explainability is inherently human-centric, tied to how people perceive explanations. Despite this, there is no consensus in the research community on whether user evaluation is crucial in XAI, and if so, what exactly needs to be evaluated and how. This systematic literature review addresses this gap by providing a detailed overview of the current state of affairs in human-centered XAI evaluation. We reviewed 73 papers across various domains where XAI was evaluated with users. These studies assessed what makes an explanation “good” from a user’s perspective, i.e., what makes an explanation meaningful to a user of an AI system. We identified 30 components of meaningful explanations that were evaluated in the reviewed papers and categorized them into a taxonomy of human-centered XAI evaluation, based on: (a) the contextualized quality of the explanation, (b) the contribution of the explanation to human-AI interaction, and (c) the contribution of the explanation to human-AI performance. Our analysis also revealed a lack of standardization in the methodologies applied in XAI user studies, with only 19 of the 73 papers applying an evaluation framework used by at least one other study in the sample. These inconsistencies hinder cross-study comparisons and broader insights. Our findings contribute to understanding what makes explanations meaningful to users and how to measure this, guiding the XAI community toward a more unified approach in human-centered explainability.