BACKGROUND: To evaluate whether a training programme is a feasible approach to facilitating occupational health professionals' (OHPs) use of the knowledge and skills provided by a guideline.

METHODS: Feasibility was evaluated along three aspects: 'acceptability', 'implementation' and 'limited efficacy'. Statements on acceptability and implementation were rated by OHPs on 10-point visual analogue scales after completing the training programme (T2), and the answers were analysed using descriptive statistics. Barriers to and facilitators of implementation were explored through open-ended questions at T2, which were analysed qualitatively. Limited efficacy was evaluated by measuring the level of knowledge and skills at baseline (T0), after reading the guideline (T1) and directly after completing the training programme (T2). The increase in knowledge and skills was analysed using a non-parametric Friedman test and post-hoc Wilcoxon signed-rank tests (two-tailed).

RESULTS: The 38 OHPs found the training programme acceptable, judging that it was relevant (M: 8, SD: 1), increased their capability (M: 7, SD: 1), aligned with their daily practice (M: 8, SD: 1) and enhanced their guidance and assessment of people with a chronic disease (M: 8, SD: 1). OHPs considered it feasible to implement the programme on a larger scale (M: 7, SD: 1) but foresaw barriers such as time, money and organizational constraints. The reported facilitators related primarily to the added value of the knowledge and skills for the OHPs' guidance and assessment, and to the programme teaching them to apply the evidence in practice. Regarding limited efficacy, a significant increase was seen in OHPs' knowledge and skills over time (χ²(2) = 53.656, p < 0.001), with the median score improving from 6.3 (T0) to 8.3 (T1) to 12.3 (T2). Post-hoc tests indicated a significant improvement between T0 and T1 (p < 0.001) and between T1 and T2 (p < 0.001).

CONCLUSIONS: The training programme was found to be a feasible approach to facilitating OHPs' use of the knowledge and skills provided by the guideline, from the perspective of OHPs in general (acceptability and implementation) and with respect to their increase in knowledge and skills in particular (limited efficacy).
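The repeated-measures analysis described above (a Friedman test across the three time points) can be sketched as follows. This is a minimal illustration of the test statistic only, with invented scores, not the study's data; in practice one would use `scipy.stats.friedmanchisquare` and `scipy.stats.wilcoxon` for the full analysis.

```python
def friedman_stat(scores):
    """Friedman chi-square statistic for repeated measures.

    scores: list of per-subject sequences, one value per condition
    (e.g. one OHP's knowledge scores at T0, T1 and T2).
    """
    n = len(scores)       # number of subjects
    k = len(scores[0])    # number of conditions (time points)
    rank_sums = [0.0] * k
    for row in scores:
        # rank the k values within this subject, averaging tied values
        ordered = sorted(range(k), key=lambda j: row[j])
        ranks = [0.0] * k
        i = 0
        while i < k:
            j = i
            while j + 1 < k and row[ordered[j + 1]] == row[ordered[i]]:
                j += 1
            avg = (i + j) / 2 + 1  # average 1-based rank of the tie group
            for m in range(i, j + 1):
                ranks[ordered[m]] = avg
            i = j + 1
        for j in range(k):
            rank_sums[j] += ranks[j]
    # chi^2_F = 12 / (n k (k+1)) * sum_j R_j^2 - 3 n (k+1)
    return (12.0 / (n * k * (k + 1))) * sum(r * r for r in rank_sums) \
        - 3.0 * n * (k + 1)

# hypothetical T0/T1/T2 scores for three subjects, all improving over time
data = [(5.0, 7.0, 11.0), (6.5, 8.0, 12.0), (6.0, 9.0, 13.0)]
print(friedman_stat(data))  # -> 6.0, the maximum for n = 3, k = 3
```

A large statistic relative to the χ² distribution with k − 1 degrees of freedom indicates that scores differ systematically across time points, after which pairwise post-hoc tests (such as the Wilcoxon signed-rank tests used in the study) locate the differences.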
This guide was developed for designers and developers of AI systems, with the goal of ensuring that those systems are sufficiently explainable. 'Sufficiently' here means that the system meets the legal requirements of the AI Act and the GDPR and that users can use it properly. Explainability of decisions is an important requirement in many systems and a key principle for AI systems [HLEG19]. In many AI systems, explainability is not self-evident, and AI researchers expect that the challenge of making AI explainable will only grow. On the one hand, this stems from the applications: AI will be used more and more often, for larger and more sensitive decisions. On the other hand, organizations are building ever better models, for example by using more and more diverse inputs. With more complex AI models, it is often less clear how a decision was made.

Organizations that deploy AI must take users' need for explanations into account, and systems that use AI should be designed to provide the user with appropriate explanations. In this guide, we first explain the legal requirements for the explainability of AI systems, which come from the GDPR and the AI Act. Next, we explain how AI is used in the financial sector and elaborate on one problem in detail. For this problem, we then show how the user interface can be modified to make the AI explainable. These designs serve as prototypical examples that can be adapted to new problems. Although this guidance is based on the explainability of AI systems in the financial sector, the advice can also be applied in other sectors.