Post-training quantization reduces the computational demand of Large Language Models (LLMs) but can weaken some of their capabilities. Since LLM abilities emerge with scale, smaller LLMs are more sensitive to quantization. In this paper, we explore how quantization affects smaller LLMs’ ability to perform retrieval-augmented generation (RAG), specifically in longer contexts. We chose personalization for evaluation because it is a challenging task for RAG, requiring long-context reasoning over multiple documents. We compare the original FP16 and the quantized INT4 performance of multiple 7B and 8B LLMs on two tasks while progressively increasing the number of retrieved documents to test how quantized models handle longer contexts. To better understand the effect of retrieval, we evaluate three retrieval models in our experiments. Our findings reveal that if a 7B LLM performs the task well, quantization does not impair its performance or its long-context reasoning capabilities. We conclude that RAG is viable with quantized smaller LLMs.
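As a minimal sketch of the kind of FP16-versus-INT4 comparison this abstract describes (the paper's code is not reproduced here; the model name, prompt format, and generation settings are illustrative assumptions), the same 7B model can be loaded in both precisions with Hugging Face transformers and bitsandbytes:

```python
# Hypothetical sketch: load one 7B model in FP16 and in INT4 for a RAG comparison.
# Model name and prompt format are assumptions, not the paper's actual setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "mistralai/Mistral-7B-Instruct-v0.2"  # assumption: any 7B chat model
tokenizer = AutoTokenizer.from_pretrained(model_name)

# FP16 baseline.
model_fp16 = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16, device_map="auto"
)

# INT4 post-training quantization via bitsandbytes.
model_int4 = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16
    ),
    device_map="auto",
)

def answer(model, question, retrieved_docs):
    """Prepend k retrieved documents to the prompt; context grows with k."""
    context = "\n\n".join(retrieved_docs)
    prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=128)
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
```

Running `answer` with both models while increasing the number of retrieved documents mirrors the paper's progressive long-context evaluation.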
This final installment in our e-learning series offers a comprehensive look at the current impact and future potential of data science across industries. Using real-world examples like medical image analysis and operational efficiencies at Rotterdam The Hague Airport, we showcase data science’s transformative capabilities. The video also introduces the promise of Large Language Models (LLMs) such as ChatGPT and the simplification brought by Automated Machine Learning (AutoML). Emphasizing the blend of technology and human insight, we explore the evolving landscape of AI and data science for businesses.
In the rapidly evolving field of marketing and communication, staying ahead means embracing technological innovations. The latest breakthrough, silicon sampling, leverages AI to revolutionize market research by creating synthetic personas that mimic human responses. This method, which utilizes large language models (LLMs) like GPT-4o, offers a cost-efficient and less time-consuming alternative to traditional market research. Roberta Vaznyte and Marieke van Vliet (Fontys University of Applied Sciences) have explored the promise and challenges of silicon sampling, highlighting key findings from recent experiments and the implications for the future of market research.
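A minimal sketch of how a synthetic persona might be queried in silicon sampling (the persona description and survey question below are invented for illustration; the authors' actual protocol is not shown here):

```python
# Hypothetical sketch of silicon sampling: an LLM answers a survey question
# "in character" as a synthetic persona. Persona and question are invented.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

persona = (
    "You are a 34-year-old urban commuter who cycles to work, "
    "is price-sensitive, and cares about sustainability."
)
question = "How likely are you to try a new shared e-bike service, and why?"

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": persona},
        {"role": "user", "content": question},
    ],
)
print(response.choices[0].message.content)
```

Repeating this over many sampled persona descriptions yields a synthetic respondent pool whose answers can be compared against human survey data.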
This article investigates gender bias in narratives generated by Large Language Models (LLMs) through a two-phase study. Building on our existing work in narrative generation, we employ a structured methodology to analyze the influence of protagonist gender on both the generation and classification of fictional stories. In Phase 1, factual narratives were generated using six LLMs, guided by predefined narrative structures (Hero's Journey and Heroine's Journey). Gender bias was quantified through specialized metrics and statistical analyses, revealing significant disparities in protagonist gender distribution and associations with narrative archetypes. In Phase 2, counterfactual narratives were constructed by altering the protagonists’ genders while preserving all other narrative elements. These narratives were then classified by the same LLMs to assess how gender influences their interpretation of narrative structures. Results indicate that LLMs exhibit difficulty in disentangling the protagonist's gender from the narrative structure, often using gender as a heuristic to classify stories. Male protagonists in emotionally driven narratives were frequently misclassified as following the Heroine's Journey, while female protagonists in logic-driven conflicts were misclassified as adhering to the Hero's Journey. These findings provide empirical evidence of embedded gender biases in LLM-generated narratives, highlighting the need for bias mitigation strategies in AI-driven storytelling to promote diversity and inclusivity in computational narrative generation.
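A minimal sketch of the Phase 2 counterfactual step as the abstract describes it, assuming a generic chat-completion helper (`ask_llm`) and a naive pronoun swap; the authors' actual swapping procedure and classification prompts are not reproduced here:

```python
# Hypothetical sketch: swap the protagonist's gendered pronouns, then ask the
# same LLM to classify the narrative structure of both story versions.
import re

# Simplification: real counterfactual swaps need care (names, "her" vs "his", case).
SWAPS = {"he": "she", "she": "he", "him": "her", "her": "him",
         "his": "her", "hers": "his"}

def swap_gender(story: str) -> str:
    """Replace gendered pronouns while preserving all other narrative elements."""
    return re.sub(
        r"\b(" + "|".join(SWAPS) + r")\b",
        lambda m: SWAPS[m.group(1).lower()],
        story,
        flags=re.IGNORECASE,
    )

def classify_structure(story: str, ask_llm) -> str:
    """Ask the model whether the story follows the Hero's or Heroine's Journey."""
    prompt = (
        "Does the following story follow the Hero's Journey or the "
        f"Heroine's Journey? Answer with one label only.\n\n{story}"
    )
    return ask_llm(prompt)

# counterfactual = swap_gender(original_story)
# label_original = classify_structure(original_story, ask_llm)
# label_counterfactual = classify_structure(counterfactual, ask_llm)
```

If the two labels differ while only the protagonist's gender changed, the model is using gender as a classification heuristic, which is the effect the study measures.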
We examine the ideological differences in the debate surrounding large language models (LLMs) and AI regulation, focusing on the contrasting positions of the Future of Life Institute (FLI) and the Distributed AI Research (DAIR) institute. The study employs a humanistic HCI methodology, applying narrative theory to HCI-related topics and analyzing the political differences between FLI and DAIR, as they are brought to bear on research on LLMs. Two conceptual lenses, “existential risk” and “ongoing harm,” are applied to reveal differing perspectives on AI's societal and cultural significance. Adopting a longtermist perspective, FLI prioritizes preventing existential risks, whereas DAIR emphasizes addressing ongoing harm and human rights violations. The analysis further discusses these organizations’ stances on risk priorities, AI regulation, and attribution of responsibility, ultimately revealing the diverse ideological underpinnings of the AI and LLMs debate. Our analysis highlights the need for more studies of longtermism's impact on vulnerable populations, and we urge HCI researchers to consider the subtle yet significant differences in the discourse on LLMs.
Narrative structures such as the Hero’s Journey and Heroine’s Journey have long influenced how characters, themes, and roles are portrayed in storytelling. When used to guide narrative generation in systems powered by Large Language Models (LLMs), these structures may interact with model-internal biases, reinforcing traditional gender norms. This workshop examines how protagonist gender and narrative structure shape storytelling outcomes in LLM-based storytelling systems. Through hands-on experiments and guided analysis, participants will explore gender representation in LLM-generated stories, perform counterfactual modifications, and evaluate how narrative interpretations shift when character gender is altered. The workshop aims to foster interdisciplinary collaborations, inspire novel methodologies, and advance research on fair and inclusive AI-driven storytelling in games and interactive media.
The report ‘Artificiële Intelligentie en passende zorg’ (Artificial Intelligence and appropriate care) by Zorginstituut Nederland highlights the opportunities and challenges of AI in healthcare. Artificial Intelligence (AI) also offers numerous possibilities for medical specialist rehabilitation care (MSR). Guided by this report, we examine the opportunities and challenges of AI for MSR. Collaboration within the rehabilitation sector is essential to integrate AI applications into rehabilitation care effectively and responsibly.
This exploration with ChatGPT underscores two vital lessons for human rights law education. First, the importance of reflective and critical prompting techniques that challenge the model to critique its own responses. Second, the potential of customizing AI tools like ChatGPT by incorporating diverse scholarly perspectives to foster a more inclusive and comprehensive understanding of human rights. It also shows the promise of collaborative approaches to building tools that support pluriversal approaches to the study of human rights law.
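A minimal sketch of such a reflective prompting loop (the question, the critical perspective named, and the use of the OpenAI chat API are illustrative assumptions, not the authors' materials):

```python
# Hypothetical sketch of reflective prompting: ask a question, then challenge
# the model to critique its own answer from a named scholarly perspective.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

question = "What are the core obligations under the right to education?"
first = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": question}],
).choices[0].message.content

critique_prompt = (
    "Critique your previous answer from a TWAIL (Third World Approaches to "
    "International Law) perspective. What assumptions or omissions does it contain?"
)
critique = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": question},
        {"role": "assistant", "content": first},
        {"role": "user", "content": critique_prompt},
    ],
).choices[0].message.content
print(critique)
```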
Analyzing historical decision-related data can help support actual operational decision-making processes. Decision mining can be employed for such analysis. This paper proposes the Decision Discovery Framework (DDF), designed to develop, adapt, or select a decision discovery algorithm by outlining specific guidelines for input data usage, classifier handling, and decision model representation. The framework incorporates the use of Decision Model and Notation (DMN) for enhanced comprehensibility and normalization to simplify decision tables. The framework’s efficacy was tested by adapting the C4.5 algorithm into the DM45 algorithm. The proposed adaptations include (1) the utilization of a decision log, (2) ensuring an unpruned decision tree, (3) the generation of DMN, and (4) the normalization of decision tables. Future research can focus on supporting practitioners in modeling decisions, ensuring their decision-making is compliant, and suggesting improvements to the modeled decisions. Another future direction is to explore the ability to process unstructured data as input for the discovery of decisions.
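A minimal sketch of the core discovery step, using scikit-learn's CART implementation as a stand-in for C4.5 (which scikit-learn does not ship) and an invented decision log; the DM45 algorithm itself is not reproduced here:

```python
# Hypothetical sketch: learn an unpruned decision tree from a decision log,
# then print its rules as a starting point for a DMN decision table.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

# Invented decision log: inputs observed at decision time plus the recorded outcome.
log = pd.DataFrame({
    "amount":   [120, 900, 450, 80, 1500, 300],
    "customer": [1, 0, 1, 1, 0, 0],   # 1 = existing customer
    "approved": [1, 0, 1, 1, 0, 1],   # recorded decision outcome
})

# No max_depth or ccp_alpha: the tree stays unpruned, as adaptation (2) requires.
# criterion="entropy" mirrors C4.5's information-gain splitting.
tree = DecisionTreeClassifier(criterion="entropy")
tree.fit(log[["amount", "customer"]], log["approved"])

# Each root-to-leaf path corresponds to one row of a (to-be-normalized) decision table.
print(export_text(tree, feature_names=["amount", "customer"]))
```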