This article investigates gender bias in narratives generated by Large Language Models (LLMs) through a two-phase study. Building on our existing work in narrative generation, we employ a structured methodology to analyze the influence of protagonist gender on both the generation and classification of fictional stories. In Phase 1, factual narratives were generated using six LLMs, guided by predefined narrative structures (Hero's Journey and Heroine's Journey). Gender bias was quantified through specialized metrics and statistical analyses, revealing significant disparities in protagonist gender distribution and associations with narrative archetypes. In Phase 2, counterfactual narratives were constructed by altering the protagonists’ genders while preserving all other narrative elements. These narratives were then classified by the same LLMs to assess how gender influences their interpretation of narrative structures. Results indicate that LLMs exhibit difficulty in disentangling the protagonist's gender from the narrative structure, often using gender as a heuristic to classify stories. Male protagonists in emotionally driven narratives were frequently misclassified as following the Heroine's Journey, while female protagonists in logic-driven conflicts were misclassified as adhering to the Hero's Journey. These findings provide empirical evidence of embedded gender biases in LLM-generated narratives, highlighting the need for bias mitigation strategies in AI-driven storytelling to promote diversity and inclusivity in computational narrative generation.
Narrative structures such as the Hero’s Journey and Heroine’s Journey have long influenced how characters, themes, and roles are portrayed in storytelling. When used to guide narrative generation in systems powered by Large Language Models (LLMs), these structures may interact with model-internal biases, reinforcing traditional gender norms. This workshop examines how protagonist gender and narrative structure shape storytelling outcomes in LLM-based storytelling systems. Through hands-on experiments and guided analysis, participants will explore gender representation in LLM-generated stories, perform counterfactual modifications, and evaluate how narrative interpretations shift when character gender is altered. The workshop aims to foster interdisciplinary collaborations, inspire novel methodologies, and advance research on fair and inclusive AI-driven storytelling in games and interactive media.
The word 'bias' appears in both the societal and the scientific debate about the deployment of artificial intelligence (AI). It usually refers to a prejudice that something or someone holds, often unintentionally. When this prejudice leads to a deviation in decision-making compared to a situation in which the prejudice were absent, the bias is generally undesirable.