Although causal inference has proven valuable for estimating effect sizes in fields such as physics, medical studies, and economics, it is rarely used in sports science. Targeted Maximum Likelihood Estimation (TMLE) is a modern method for performing causal inference: it is forgiving of misspecification of the causal model and improves effect-size estimation by incorporating machine-learning methods. We demonstrate the advantage of TMLE in sports science by comparing its estimated effect sizes with those of a Generalized Linear Model (GLM). In this study, we introduce TMLE, provide a roadmap for performing causal inference, and apply the roadmap, together with both methods, in a simulation study and a case study investigating the influence of substitutions on the physical performance of an entire soccer team (i.e., the effect size of substitutions on total physical performance). We construct a causal model, a misspecified causal model, a simulated dataset, and an observed tracking dataset of individual players from 302 elite soccer matches. The results on the simulated dataset show that TMLE outperforms GLM in estimating the effect size of substitutions on total physical performance. Moreover, TMLE is the more robust of the two methods against model misspecification, in both the simulated and the tracking dataset. Regardless of the method used, however, the tracking dataset shows that substitutes increase the physical performance of the entire soccer team.
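To make the TMLE-versus-GLM comparison concrete, below is a minimal sketch of a TMLE estimate of an average treatment effect for a binary treatment (a substitution) and a continuous outcome (total physical performance) on simulated data. The data-generating process, variable names, and model choices are illustrative assumptions, not the study's actual specification.

```python
# Minimal TMLE sketch for the average treatment effect (ATE) of a binary
# treatment A on a continuous outcome Y, given covariates W (simulated data).
import numpy as np
from scipy.special import expit, logit
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 2000
W = rng.normal(size=(n, 3))                          # covariates (match context)
A = rng.binomial(1, expit(0.5 * W[:, 0]))            # treatment: substitution made
Y = 1.0 * A + W[:, 0] + 0.5 * W[:, 2] + rng.normal(size=n)  # outcome

# Step 1: initial outcome regression Q(A, W), fit with a machine-learning model.
Q_fit = GradientBoostingRegressor().fit(np.column_stack([A, W]), Y)
Q1 = Q_fit.predict(np.column_stack([np.ones(n), W]))
Q0 = Q_fit.predict(np.column_stack([np.zeros(n), W]))
QA = np.where(A == 1, Q1, Q0)

# Step 2: propensity score g(W) = P(A = 1 | W), truncated away from 0 and 1.
g = np.clip(GradientBoostingClassifier().fit(W, A).predict_proba(W)[:, 1],
            0.025, 0.975)

# Step 3: targeting step. Scale Y to [0, 1], then fluctuate the initial fit
# along the "clever covariate" H by solving the score equation for epsilon
# (a one-parameter logistic regression with offset, via Newton's method).
lo, hi = Y.min(), Y.max()
Ys = (Y - lo) / (hi - lo)
sq = lambda q: np.clip((q - lo) / (hi - lo), 1e-6, 1 - 1e-6)
H = A / g - (1 - A) / (1 - g)
off = logit(sq(QA))
eps = 0.0
for _ in range(25):
    p = expit(off + eps * H)
    eps += np.sum(H * (Ys - p)) / np.sum(H**2 * p * (1 - p))

# Step 4: updated counterfactual means give the targeted ATE, rescaled back.
Q1s = expit(logit(sq(Q1)) + eps / g)
Q0s = expit(logit(sq(Q0)) - eps / (1 - g))
print(f"TMLE ATE: {np.mean(Q1s - Q0s) * (hi - lo):.3f} (true effect: 1.0)")
```

In practice the initial fits would typically use cross-fitting or an ensemble (Super Learner) rather than a single gradient-boosting model; the targeting step is what confers TMLE's robustness to misspecification of either nuisance model.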
Over the past forty years, the use of process models in practice has grown extensively. Until twenty years ago, remarkably little was known about the factors that make process models understandable to humans in practice. Since then, research on this topic has been conducted, for example by formulating modelling guidelines. Unfortunately, the suggested guidelines often fail to achieve the desired effects because they are not grounded in actual experimental findings. Knowledge is therefore needed about which visualisations of process models are perceived as understandable, in order to improve the understanding of different stakeholders. The objective of this study is thus to answer the question: how can process models be visually enhanced so that they facilitate a common understanding among different stakeholders? To that end, five sub-research questions (SRQs) are discussed, covering three studies. By combining social psychology and process modelling, we can work towards a more human-centred and empirically based solution for enhancing stakeholders' understanding of process models through visualisation.
This article investigates gender bias in narratives generated by Large Language Models (LLMs) through a two-phase study. Building on our existing work in narrative generation, we employ a structured methodology to analyze the influence of protagonist gender on both the generation and classification of fictional stories. In Phase 1, factual narratives were generated using six LLMs, guided by predefined narrative structures (Hero's Journey and Heroine's Journey). Gender bias was quantified through specialized metrics and statistical analyses, revealing significant disparities in protagonist gender distribution and associations with narrative archetypes. In Phase 2, counterfactual narratives were constructed by altering the protagonists’ genders while preserving all other narrative elements. These narratives were then classified by the same LLMs to assess how gender influences their interpretation of narrative structures. Results indicate that LLMs exhibit difficulty in disentangling the protagonist's gender from the narrative structure, often using gender as a heuristic to classify stories. Male protagonists in emotionally driven narratives were frequently misclassified as following the Heroine's Journey, while female protagonists in logic-driven conflicts were misclassified as adhering to the Hero's Journey. These findings provide empirical evidence of embedded gender biases in LLM-generated narratives, highlighting the need for bias mitigation strategies in AI-driven storytelling to promote diversity and inclusivity in computational narrative generation.
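As a loose illustration of the Phase 2 counterfactual construction, the sketch below swaps a protagonist's gendered pronouns while leaving every other narrative element intact. Only the unambiguous male-to-female direction is shown; the reverse ("her" → "him"/"his") requires part-of-speech disambiguation or an LLM rewrite. The function name and example text are hypothetical, not the study's actual pipeline, which operates on full LLM-generated stories.

```python
# Naive counterfactual construction: swap male pronouns to female while
# preserving capitalisation and all other narrative content.
import re

M2F = {"he": "she", "him": "her", "his": "her",
       "himself": "herself"}

def to_female_protagonist(text: str) -> str:
    """Swap male pronouns to their female counterparts, preserving case."""
    def repl(match: re.Match) -> str:
        word = match.group(0)
        swapped = M2F[word.lower()]
        return swapped.capitalize() if word[0].isupper() else swapped
    pattern = r"\b(" + "|".join(re.escape(w) for w in M2F) + r")\b"
    return re.sub(pattern, repl, text, flags=re.IGNORECASE)

story = ("He trusted his instincts; the trials of the journey had "
         "changed him, and he returned home transformed.")
print(to_female_protagonist(story))
# -> "She trusted her instincts; the trials of the journey had
#     changed her, and she returned home transformed."
```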