Abstract: Participation of the target group is important when evaluating health promotion. In practice, however, participatory evaluation is still used only to a limited extent. To support professionals within the JOGG programme (Gezonde Jeugd, Gezonde Toekomst: Healthy Youth, Healthy Future; previously Jongeren Op Gezond Gewicht: Youth At a Healthy Weight) in involving the target group in evaluation, a tool was developed. Although this tool was developed together with JOGG professionals, it did not meet their needs. In this article we reflect on how this came about and share the lessons learned. Unequal collaboration between researchers and professionals may have contributed to the needs of practice being insufficiently incorporated into the evaluation tool. In addition, professionals themselves experience various challenges with participatory evaluation, because the context in which they work does not adequately facilitate it.
This article explores the decision-making processes in the ongoing development of an AI-supported youth mental health app. Document analysis reveals decisions taken during the grant proposal and funding phase and reflects on the reasons why AI is incorporated into innovative youth mental health care. An innovative multilogue among the transdisciplinary team of researchers, comprising AI experts, biomedical engineers, ethicists, social scientists, psychiatrists and young experts by experience, shows which decisions are taken and how. This covers (i) the role of a biomedical and exposomic understanding of psychiatry as compared to a phenomenological and experiential perspective, (ii) the impact and limits of AI co-creation by young experts by experience and mental health experts, and (iii) the different perspectives regarding the impact of AI on autonomy, empowerment and human relationships. The multilogue does not merely highlight the steps taken during human decision-making in AI development; it also raises awareness of the many complexities, and sometimes contradictions, that arise in transdisciplinary work, and it points towards the ethical challenges of digitalized youth mental health care.
The model of the Best Practice Unit (BPU) is a specific form of practice-based research. It is a variation of the Community of Practice (CoP) as developed by Wenger, McDermott and Snyder (2002), with the specific aim of innovating professional practice by combining learning, development and research. We have applied the model over the past 10 years in the domain of care and social welfare in the Netherlands. Characteristics of the model are the interaction between individual and collective learning processes, the development of new or improved working methods, and the implementation of these methods in daily practice. Multiple knowledge sources are used: experiential knowledge, professional knowledge and scientific knowledge. The research serves diverse purposes: articulating tacit knowledge, documenting the learning and innovation process, systematically describing the revealed or developed ways of working, and evaluating the efficacy of new methods. An analysis of 10 different research projects shows that the BPU is an effective model.