From diagnosis to patient scheduling, AI is being considered across a growing range of clinical applications. Yet despite increasingly powerful clinical AI, uptake into actual clinical workflows remains limited. One of the major challenges is developing appropriate trust with clinicians. In this paper, we investigate trust in clinical AI from a wider perspective that goes beyond user interactions with the AI. We identify several points in the clinical AI development, usage, and monitoring process that can have a significant impact on trust. We argue that the calibration of trust in AI should go beyond explainable AI and address the entire process of clinical AI deployment. We illustrate our argument with case studies from practitioners implementing clinical AI in practice, showing how trust can be affected at different stages of the deployment cycle.
Although learning analytics benefits learning, its uptake by higher education institutions remains low. Adopting learning analytics is a complex undertaking, and higher education institutions lack insight into how to build the organizational capabilities needed to adopt learning analytics successfully at scale. This paper describes the ex-post evaluation of a capability model for learning analytics via a mixed-method approach. The model intends to help practitioners such as program managers, policymakers, and senior management by providing them with a comprehensive overview of the necessary capabilities and their operationalization. Qualitative data were collected during pluralistic walk-throughs with 26 participants at five educational institutions and a group discussion with seven learning analytics experts. Quantitative data about the model’s perceived usefulness and ease of use were collected via a survey (n = 23). The study’s outcomes show that the model helps practitioners plan learning analytics adoption at their higher education institutions. The study also shows the applicability of pluralistic walk-throughs as a method for the ex-post evaluation of Design Science Research artefacts.
Introduction: Given the complexity of teaching clinical reasoning to (future) healthcare professionals, serious games have become popular for supporting clinical reasoning education. This scoping review outlines games designed to support the teaching of clinical reasoning in health professions education, with a specific emphasis on their alignment with the 8-step clinical reasoning cycle and the reflective practice framework, both fundamental for effective learning. Methods: A scoping review using systematic searches across seven databases (PubMed, CINAHL, ERIC, PsycINFO, Scopus, Web of Science, and Embase) was conducted. Game characteristics, technical requirements, and incorporation of clinical reasoning cycle steps were analyzed. Additional game information was obtained from the authors. Results: Nineteen unique games emerged, primarily of the simulation and escape-room genres. Most games incorporated the following clinical reasoning steps: patient consideration (step 1), cue collection (step 2), intervention (step 6), and outcome evaluation (step 7). Processing information (step 3) and understanding the patient’s problem (step 4) were less prevalent, while goal setting (step 5) and reflection (step 8) were least integrated. Conclusion: All serious games reviewed show potential for improving clinical reasoning skills, but thoughtful alignment with learning objectives and contextual factors is vital. While this study aids health professions educators in understanding how games may support the teaching of clinical reasoning, further research is needed to optimize their effective use in education. Notably, most games lack explicit incorporation of all clinical reasoning cycle steps, especially reflection, limiting their role in reflective practice. Hence, we recommend prioritizing a systematic clinical reasoning model with explicit reflective steps when using serious games for teaching clinical reasoning.