Post-training quantization reduces the computational demand of Large Language Models (LLMs) but can weaken some of their capabilities. Since LLM abilities emerge with scale, smaller LLMs are more sensitive to quantization. In this paper, we explore how quantization affects smaller LLMs’ ability to perform retrieval-augmented generation (RAG), specifically in longer contexts. We chose personalization for evaluation because it is a challenging domain for RAG, as it requires long-context reasoning over multiple documents. We compare the original FP16 and the quantized INT4 performance of multiple 7B and 8B LLMs on two tasks while progressively increasing the number of retrieved documents to test how quantized models fare in longer contexts. To better understand the effect of retrieval, we evaluate three retrieval models in our experiments. Our findings reveal that if a 7B LLM performs the task well, quantization does not impair its performance or its long-context reasoning capabilities. We conclude that it is possible to utilize RAG with quantized smaller LLMs.
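As a toy illustration of what post-training INT4 quantization does to a weight tensor (a minimal sketch with synthetic weights, not the quantization scheme or evaluation setup used in the paper), symmetric round-to-nearest quantization can be simulated as:

```python
import random

def quantize_int4(weights):
    """Simulate symmetric round-to-nearest INT4 quantization (codes in [-8, 7])."""
    scale = max(abs(w) for w in weights) / 7.0
    codes = [max(-8, min(7, round(w / scale))) for w in weights]
    return codes, scale

def dequantize(codes, scale):
    """Map 4-bit codes back to floating point."""
    return [c * scale for c in codes]

random.seed(0)
weights = [random.gauss(0.0, 0.02) for _ in range(4096)]  # synthetic FP weights
codes, scale = quantize_int4(weights)
restored = dequantize(codes, scale)
mse = sum((w, r) == () or (w - r) ** 2 for w, r in zip(weights, restored)) / len(weights)
mse = sum((w - r) ** 2 for w, r in zip(weights, restored)) / len(weights)
print(f"scale: {scale:.2e}, reconstruction MSE: {mse:.2e}")
```

The rounding error per weight is bounded by half the scale step; whether that error hurts a downstream task is exactly the empirical question the paper studies.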
Narrative structures such as the Hero’s Journey and Heroine’s Journey have long influenced how characters, themes, and roles are portrayed in storytelling. When used to guide narrative generation in systems powered by Large Language Models (LLMs), these structures may interact with model-internal biases, reinforcing traditional gender norms. This workshop examines how protagonist gender and narrative structure shape storytelling outcomes in LLM-based storytelling systems. Through hands-on experiments and guided analysis, participants will explore gender representation in LLM-generated stories, perform counterfactual modifications, and evaluate how narrative interpretations shift when character gender is altered. The workshop aims to foster interdisciplinary collaborations, inspire novel methodologies, and advance research on fair and inclusive AI-driven storytelling in games and interactive media.
This article investigates gender bias in narratives generated by Large Language Models (LLMs) through a two-phase study. Building on our existing work in narrative generation, we employ a structured methodology to analyze the influence of protagonist gender on both the generation and classification of fictional stories. In Phase 1, factual narratives were generated using six LLMs, guided by predefined narrative structures (Hero's Journey and Heroine's Journey). Gender bias was quantified through specialized metrics and statistical analyses, revealing significant disparities in protagonist gender distribution and associations with narrative archetypes. In Phase 2, counterfactual narratives were constructed by altering the protagonists’ genders while preserving all other narrative elements. These narratives were then classified by the same LLMs to assess how gender influences their interpretation of narrative structures. Results indicate that LLMs exhibit difficulty in disentangling the protagonist's gender from the narrative structure, often using gender as a heuristic to classify stories. Male protagonists in emotionally driven narratives were frequently misclassified as following the Heroine's Journey, while female protagonists in logic-driven conflicts were misclassified as adhering to the Hero's Journey. These findings provide empirical evidence of embedded gender biases in LLM-generated narratives, highlighting the need for bias mitigation strategies in AI-driven storytelling to promote diversity and inclusivity in computational narrative generation.
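The counterfactual construction in Phase 2 — altering the protagonist's gender while preserving all other narrative elements — can be sketched as a simple term-mapping pass. This is an illustration only, with an invented, minimal term list; a real pipeline would also need name substitution and coreference resolution:

```python
import re

# Minimal illustrative gender-term mapping (not the study's actual resource).
SWAP = {
    "he": "she", "she": "he",
    "him": "her", "her": "him",
    "his": "her", "hers": "his",
    "himself": "herself", "herself": "himself",
    "man": "woman", "woman": "man",
    "hero": "heroine", "heroine": "hero",
}

def swap_gender(text: str) -> str:
    """Swap gendered terms while leaving all other narrative elements intact."""
    def repl(match):
        word = match.group(0)
        swapped = SWAP[word.lower()]
        return swapped.capitalize() if word[0].isupper() else swapped
    pattern = r"\b(" + "|".join(SWAP) + r")\b"
    return re.sub(pattern, repl, text, flags=re.IGNORECASE)

print(swap_gender("The hero drew his sword; he knew she feared him."))
# → The heroine drew her sword; she knew he feared her.
```

Holding everything but gender fixed is what lets the classification phase attribute any shift in predicted narrative structure to the protagonist's gender alone.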
This field report examines initiatives in music lifelong learning through the research network Lifelong Learning in Music (LLM) of the Prince Claus Conservatoire in the Netherlands. LLM collaborates with other schools on various projects. In addition, there is an innovative master's programme, ‘New audiences and innovative practices’, run with a variety of partners.
We examine the ideological differences in the debate surrounding large language models (LLMs) and AI regulation, focusing on the contrasting positions of the Future of Life Institute (FLI) and the Distributed AI Research (DAIR) institute. The study employs a humanistic HCI methodology, applying narrative theory to HCI-related topics and analyzing the political differences between FLI and DAIR, as they are brought to bear on research on LLMs. Two conceptual lenses, “existential risk” and “ongoing harm,” are applied to reveal differing perspectives on AI's societal and cultural significance. Adopting a longtermist perspective, FLI prioritizes preventing existential risks, whereas DAIR emphasizes addressing ongoing harm and human rights violations. The analysis further discusses these organizations’ stances on risk priorities, AI regulation, and attribution of responsibility, ultimately revealing the diverse ideological underpinnings of the AI and LLMs debate. Our analysis highlights the need for more studies of longtermism's impact on vulnerable populations, and we urge HCI researchers to consider the subtle yet significant differences in the discourse on LLMs.
With the proliferation of misinformation on the web, automatic methods for detecting misinformation are becoming an increasingly important subject of study. If automatic misinformation detection is applied in a real-world setting, it is necessary to validate the methods being used. Large language models (LLMs) have produced the best results among text-based methods. However, fine-tuning such a model requires a significant amount of training data, which has led to the automatic creation of large-scale misinformation detection datasets. In this paper, we explore the biases present in one such dataset for misinformation detection in English, NELA-GT-2019. We find that models are at least partly learning the stylistic and other features of different news sources rather than the features of unreliable news. Furthermore, we use SHAP to interpret the outputs of a fine-tuned LLM and validate the explanation method using our inherently interpretable baseline. We critically analyze the suitability of SHAP for text applications by comparing the outputs of SHAP to the most important features from our logistic regression models.
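An inherently interpretable baseline of the kind described above — a bag-of-words logistic regression whose explanation is simply its coefficients, ranked by magnitude — can be sketched as follows. The feature names and counts are invented for illustration and are not drawn from NELA-GT-2019:

```python
import math

# Toy bag-of-words data over invented style/content features.
FEATURES = ["exclaim", "allcaps", "cites_source", "hedging"]
X = [
    [3, 2, 0, 0],  # articles labeled unreliable
    [4, 1, 0, 1],
    [2, 3, 1, 0],
    [0, 0, 2, 2],  # articles labeled reliable
    [1, 0, 3, 1],
    [0, 1, 2, 3],
]
y = [1, 1, 1, 0, 0, 0]  # 1 = unreliable

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Plain stochastic gradient descent on log loss (no intercept, for brevity).
w = [0.0] * len(FEATURES)
lr = 0.1
for _ in range(2000):
    for i, row in enumerate(X):
        p = sigmoid(sum(wj * xj for wj, xj in zip(w, row)))
        for j, xj in enumerate(row):
            w[j] -= lr * (p - y[i]) * xj

# The model's "explanation" is just its coefficients, largest magnitude first.
ranked = sorted(zip(FEATURES, w), key=lambda t: abs(t[1]), reverse=True)
for name, coef in ranked:
    print(f"{name:>12}: {coef:+.2f}")
```

Because the coefficients are the model, they give a ground truth against which a post-hoc explainer such as SHAP can be checked, which is the role the baseline plays in the validation described above.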
This exploration with ChatGPT underscores two vital lessons for human rights law education. First, the importance of reflective and critical prompting techniques that challenge the model to critique its own responses. Second, the potential of customizing AI tools like ChatGPT, incorporating diverse scholarly perspectives to foster a more inclusive and comprehensive understanding of human rights. It also shows the promise of using collaborative approaches to build tools that help create pluriversal approaches to the study of human rights law.
This final installment in our e-learning series offers a comprehensive look at the current impact and future potential of data science across industries. Using real-world examples like medical image analysis and operational efficiencies at Rotterdam The Hague Airport, we showcase data science’s transformative capabilities. The video also introduces the promise of Large Language Models (LLMs) such as ChatGPT and the simplification brought by Automated Machine Learning (AutoML). Emphasizing the blend of technology and human insight, we explore the evolving landscape of AI and data science for businesses.
Analyzing historical decision-related data can help support actual operational decision-making processes. Decision mining can be employed for such analysis. This paper proposes the Decision Discovery Framework (DDF), designed to develop, adapt, or select a decision discovery algorithm by outlining specific guidelines for input data usage, classifier handling, and decision model representation. This framework incorporates the use of Decision Model and Notation (DMN) for enhanced comprehensibility and normalization to simplify decision tables. The framework’s efficacy was tested by adapting the C4.5 algorithm to the DM45 algorithm. The proposed adaptations include (1) the use of a decision log as input, (2) ensuring an unpruned decision tree, (3) the generation of DMN, and (4) the normalization of decision tables. Future research can focus on supporting practitioners in modeling decisions, ensuring their decision-making is compliant, and suggesting improvements to the modeled decisions. Another future research direction is to explore the ability to process unstructured data as input for the discovery of decisions.
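One simple normalization step for a DMN-style decision table — dropping exact-duplicate rules and flagging rules whose identical conditions lead to conflicting outcomes — can be sketched as below. The rule format and the credit-decision example are invented for illustration; this is not the DM45 algorithm itself:

```python
# Toy DMN-style decision table: each rule maps attribute conditions to an outcome.
def normalize(rules):
    """Remove exact-duplicate rules; collect condition sets with conflicting outcomes."""
    seen = {}          # frozen condition set -> first outcome observed
    normalized = []
    conflicts = []
    for conditions, outcome in rules:
        key = frozenset(conditions.items())
        if key in seen:
            if seen[key] != outcome:
                # Same conditions, different outcome: the table is inconsistent.
                conflicts.append((conditions, seen[key], outcome))
            continue  # duplicate (or contradictory) rule is not re-added
        seen[key] = outcome
        normalized.append((conditions, outcome))
    return normalized, conflicts

rules = [
    ({"amount": "high", "history": "good"}, "approve"),
    ({"amount": "high", "history": "good"}, "approve"),  # exact duplicate
    ({"amount": "high", "history": "bad"}, "reject"),
    ({"amount": "high", "history": "bad"}, "approve"),   # contradiction
]
normalized, conflicts = normalize(rules)
print(f"{len(normalized)} rules kept, {len(conflicts)} conflict(s) flagged")
```

Normalization of this kind is what keeps a discovered decision table compact and internally consistent before it is rendered in DMN for practitioners.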