The Internet offers many opportunities to provide parenting support. An overview of empirical studies in this domain is lacking, and little is known about the design of web-based parenting resources and their evaluations, raising questions about their position in the context of parenting intervention programs. This article is a systematic review of empirical studies (n = 75), published between 1998 and 2010, that describe resources of peer and professional online support for parents. These studies generally report positive outcomes of online parenting support. A number of recent experimental studies evaluated effects, including randomized controlled trials and quasi-experimental designs (totaling 1,615 parents and 740 children). A relatively large proportion of the studies in our sample reported a content analysis of e-mails and posts (totaling 15,059 coded messages). The results of this review show that the Internet offers a variety of opportunities for sharing peer support and consulting professionals. The field of study reflects an emphasis on online resources for parents of preschool children, concerning health topics and providing professional support. A range of technologies to facilitate online communication is applied in the evaluated Web sites, although the combination of multiple components in one resource is not very common. The first generation of online resources has already changed parenting and parenting support for a large group of parents and professionals. Suggestions for future development and research are discussed.
Dissertation on the design and evaluation of the higher professional education program in Orthopedic Technology (Hogere Beroepsopleiding Orthopedische Technologie) in the Netherlands. In addition to the design of the program, this dissertation presents a comparison with other programs in higher orthopedic technology education worldwide.
Explainable Artificial Intelligence (XAI) aims to provide insights into the inner workings and the outputs of AI systems. Recently, there has been growing recognition that explainability is inherently human-centric, tied to how people perceive explanations. Despite this, there is no consensus in the research community on whether user evaluation is crucial in XAI, and if so, what exactly needs to be evaluated and how. This systematic literature review addresses this gap by providing a detailed overview of the current state of affairs in human-centered XAI evaluation. We reviewed 73 papers across various domains where XAI was evaluated with users. These studies assessed what makes an explanation "good" from a user's perspective, i.e., what makes an explanation meaningful to a user of an AI system. We identified 30 components of meaningful explanations that were evaluated in the reviewed papers and categorized them into a taxonomy of human-centered XAI evaluation, based on: (a) the contextualized quality of the explanation, (b) the contribution of the explanation to human-AI interaction, and (c) the contribution of the explanation to human-AI performance. Our analysis also revealed a lack of standardization in the methodologies applied in XAI user studies, with only 19 of the 73 papers applying an evaluation framework used by at least one other study in the sample. These inconsistencies hinder cross-study comparisons and broader insights. Our findings contribute to understanding what makes explanations meaningful to users and how to measure this, guiding the XAI community toward a more unified approach in human-centered explainability.