AIM To examine which instruments used to assess participation of children with acquired brain injury (ABI) or cerebral palsy (CP) align with the attendance and/or involvement constructs of participation, and to systematically review the measurement properties of these instruments in children with ABI or CP, to guide instrument selection. METHOD Five databases were searched. Instruments that quantify the ‘attendance’ and/or ‘involvement’ aspects of participation according to the family of participation-related constructs were selected. Data on measurement properties were extracted, and the methodological quality of the studies was assessed. RESULTS Thirty-seven instruments were used to assess participation in children with ABI or CP. Of these, 12 measured attendance and/or involvement. The reliability, validity, and responsiveness of eight of these instruments were examined in 14 studies of children with ABI or CP. Sufficient measurement properties were reported for most of the measures, but no instrument had been assessed on all relevant properties, and most psychometric studies had marked methodological limitations. INTERPRETATION Instruments to assess participation of children with ABI or CP should be selected carefully, as many available measures do not align with attendance and/or involvement. Evidence for their measurement properties is limited, mainly because of the low methodological quality of the available studies. Future studies should follow recommended methodological guidelines.
Explainable Artificial Intelligence (XAI) aims to provide insights into the inner workings and the outputs of AI systems. Recently, there has been growing recognition that explainability is inherently human-centric, tied to how people perceive explanations. Despite this, there is no consensus in the research community on whether user evaluation is crucial in XAI, and if so, what exactly needs to be evaluated and how. This systematic literature review addresses this gap by providing a detailed overview of the current state of human-centered XAI evaluation. We reviewed 73 papers across various domains in which XAI was evaluated with users. These studies assessed what makes an explanation “good” from a user’s perspective, i.e., what makes an explanation meaningful to a user of an AI system. We identified 30 components of meaningful explanations that were evaluated in the reviewed papers and categorized them into a taxonomy of human-centered XAI evaluation, based on: (a) the contextualized quality of the explanation, (b) the contribution of the explanation to human-AI interaction, and (c) the contribution of the explanation to human-AI performance. Our analysis also revealed a lack of standardization in the methodologies applied in XAI user studies, with only 19 of the 73 papers applying an evaluation framework used by at least one other study in the sample. These inconsistencies hinder cross-study comparisons and broader insights. Our findings contribute to understanding what makes explanations meaningful to users and how to measure this, guiding the XAI community toward a more unified approach to human-centered explainability.
The healthcare sector is typically structured as (specialist) intertwined practices co-occurring in formal or informal networks. These practices must answer to the concerns and needs of all related stakeholders. Multimorbidity and the need to share knowledge for scientific development are among the driving factors for collaboration in healthcare. Establishing and maintaining a permanent collaborative link takes effort and an understanding of the network characteristics that must be governed. Practices of Network Governance (NG) are easy to find across a variety of industries; still, there is a lack of insight into this subject, including knowledge of how to establish and maintain an effective healthcare network. Consequently, this study's research question is: How is network governance organized in the healthcare sector? A systematic literature study was performed to select 80 NG articles. Based on these publications, the characteristics of NG are made explicit. The findings demonstrate that combinations of governance style (relational versus contractual governance) and governance structure (lead versus shared governance) lead to different network dynamics. Furthermore, the results show that, to comprehend how networks in the healthcare sector emerge and can be regulated, it is vital to understand the current network type; this understanding also reveals the governing factors at play.