Objective: To compare estimates of effect and variability resulting from standard linear regression analysis and hierarchical multilevel analysis with those from cross-classified multilevel analysis under various scenarios. Study design and setting: We performed a simulation study based on a data structure from an observational study in clinical mental health care. We used a Markov chain Monte Carlo approach to simulate 18 scenarios, varying sample sizes, cluster sizes, effect sizes and between-group variances. For each scenario, we performed standard linear regression, multilevel regression with a random intercept on the patient level, multilevel regression with a random intercept on the nursing team level, and cross-classified multilevel analysis. Results: Applying cross-classified multilevel analysis had negligible influence on the effect estimates. However, ignoring cross-classification led to underestimation of the standard errors of the covariates at the two cross-classified levels and to invalidly narrow confidence intervals. This may lead to incorrect statistical inference. Varying sample size, cluster size, effect size and variance had no meaningful influence on these findings. Conclusion: In the case of cross-classified data structures, using a cross-classified multilevel model helps to obtain valid estimates of the precision of effects and thereby supports correct inference.
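To illustrate the type of comparison described above, the following Python sketch (not the study's actual code or data) simulates a single cross-classified scenario with a patient-level and a team-level covariate and contrasts a standard linear regression with a cross-classified model fitted as crossed random intercepts via variance components in statsmodels. All sample sizes, effect sizes, variances and variable names are hypothetical.

```python
# Minimal illustrative sketch: simulate one cross-classified scenario and
# compare a naive OLS fit with a crossed random-intercepts fit.
# Assumed sizes, effects and SDs are for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2024)

n_patients, n_teams, n_obs = 40, 8, 400        # assumed sample/cluster sizes
beta_patient, beta_team = 0.5, 0.3             # assumed fixed effects
sd_patient, sd_team, sd_error = 1.0, 0.5, 1.0  # assumed between/within-group SDs

# Each observation belongs to one patient AND one nursing team; patients are
# not nested within teams, so the two classifications are crossed.
patient = rng.integers(0, n_patients, n_obs)
team = rng.integers(0, n_teams, n_obs)

x_patient = rng.normal(size=n_patients)        # patient-level covariate
x_team = rng.normal(size=n_teams)              # team-level covariate
u = rng.normal(0, sd_patient, n_patients)      # patient random intercepts
v = rng.normal(0, sd_team, n_teams)            # team random intercepts

y = (beta_patient * x_patient[patient] + beta_team * x_team[team]
     + u[patient] + v[team] + rng.normal(0, sd_error, n_obs))

df = pd.DataFrame({
    "y": y,
    "x_patient": x_patient[patient],
    "x_team": x_team[team],
    "patient": patient,
    "team": team,
    "one_group": 1,                            # single group holding all rows
})

# Naive analysis: standard linear regression ignoring both classifications.
ols_fit = smf.ols("y ~ x_patient + x_team", df).fit()

# Cross-classified analysis: crossed random intercepts expressed as variance
# components within a single all-encompassing group.
cc_model = smf.mixedlm(
    "y ~ x_patient + x_team",
    df,
    groups="one_group",
    re_formula="0",
    vc_formula={"patient": "0 + C(patient)", "team": "0 + C(team)"},
)
cc_fit = cc_model.fit()

# Point estimates are typically similar; the naive standard errors for the
# patient- and team-level covariates tend to be too small.
print(ols_fit.params[["x_patient", "x_team"]], ols_fit.bse[["x_patient", "x_team"]])
print(cc_fit.fe_params, cc_fit.bse[["x_patient", "x_team"]])
```

In such a setup the fixed-effect estimates from the two fits are usually close, while the naive standard errors for the cluster-level covariates come out noticeably smaller, mirroring the underestimation and invalidly narrow confidence intervals described in the Results.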
Introduction: Given the complexity of teaching clinical reasoning to (future) healthcare professionals, serious games have become popular for supporting clinical reasoning education. This scoping review outlines games designed to support the teaching of clinical reasoning in health professions education, with a specific emphasis on their alignment with the 8-step clinical reasoning cycle and the reflective practice framework, both fundamental for effective learning. Methods: A scoping review using systematic searches across seven databases (PubMed, CINAHL, ERIC, PsycINFO, Scopus, Web of Science, and Embase) was conducted. Game characteristics, technical requirements, and incorporation of clinical reasoning cycle steps were analyzed. Additional game information was obtained from the authors. Results: Nineteen unique games emerged, primarily of the simulation and escape room genres. Most games incorporated the following clinical reasoning steps: patient consideration (step 1), cue collection (step 2), intervention (step 6), and outcome evaluation (step 7). Processing information (step 3) and understanding the patient's problem (step 4) were less prevalent, while goal setting (step 5) and reflection (step 8) were least integrated. Conclusion: All serious games reviewed show potential for improving clinical reasoning skills, but thoughtful alignment with learning objectives and contextual factors is vital. While this study aids health professions educators in understanding how games may support the teaching of clinical reasoning, further research is needed to optimize their effective use in education. Notably, most games lack explicit incorporation of all clinical reasoning cycle steps, especially reflection, limiting their role in reflective practice. Hence, we recommend prioritizing a systematic clinical reasoning model with explicit reflective steps when using serious games for teaching clinical reasoning.