With the proliferation of misinformation on the web, automatic misinformation detection methods are becoming an increasingly important subject of study. Large language models have produced the best results among content-based methods, which rely on the text of the article rather than on metadata or network features. However, finetuning such a model requires significant training data, which has led to the automatic creation of large-scale misinformation detection datasets. In these datasets, articles are not labelled directly. Rather, each news site is labelled for reliability by an established fact-checking organisation, and every article is then assigned the label corresponding to the reliability score of its source. A recent paper has explored the biases present in one such dataset, NELA-GT-2018, and shown that the models are at least partly learning the stylistic and other features of different news sources rather than the features of unreliable news. We confirm part of their findings. Apart from studying the characteristics and potential biases of the datasets, we also find it important to examine how the model architecture influences the results. We therefore explore which text features, or combinations of features, are learned by models based on contextual word embeddings as opposed to basic bag-of-words models. To elucidate this, we perform extensive error analysis, aided by the SHAP post-hoc explanation technique, on a debiased portion of the dataset. We validate the explanation technique on our inherently interpretable baseline model.
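As a rough illustration of how a post-hoc explanation technique can be applied to an inherently interpretable bag-of-words baseline of the kind described above, the sketch below fits a TF-IDF plus logistic-regression classifier and uses the shap library to surface the tokens that drive a reliability prediction. It is not the paper's code; the texts, labels, and model choice are placeholder assumptions.

```python
# Minimal sketch (not the paper's code): explaining a bag-of-words
# reliability classifier with SHAP. Texts, labels, and model are placeholders.
import numpy as np
import shap
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = [
    "Shocking cure the doctors do not want you to know about",
    "The central bank raised interest rates by a quarter point",
    "Celebrity secretly replaced by a clone, insiders claim",
    "Parliament passed the budget after a lengthy debate",
]
labels = [1, 0, 1, 0]  # 1 = article from an unreliable source, 0 = reliable

# Inherently interpretable baseline: TF-IDF features + linear classifier.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(texts).toarray()
clf = LogisticRegression().fit(X, labels)

# SHAP values for the linear model, using the training data as background.
explainer = shap.LinearExplainer(clf, X)
shap_values = explainer.shap_values(X)

# Tokens with the largest mean absolute contribution to the predictions.
feature_names = vectorizer.get_feature_names_out()
importance = np.abs(shap_values).mean(axis=0)
for idx in importance.argsort()[::-1][:5]:
    print(feature_names[idx], round(float(importance[idx]), 4))
```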
This final installment in our e-learning series offers a comprehensive look at the current impact and future potential of data science across industries. Using real-world examples like medical image analysis and operational efficiencies at Rotterdam The Hague Airport, we showcase data science’s transformative capabilities. The video also introduces the promise of Large Language Models (LLMs) such as ChatGPT and the simplification brought by Automated Machine Learning (AutoML). Emphasizing the blend of technology and human insight, we explore the evolving landscape of AI and data science for businesses.
Narrative structures such as the Hero’s Journey and Heroine’s Journey have long influenced how characters, themes, and roles are portrayed in storytelling. When used to guide narrative generation in systems powered by Large Language Models (LLMs), these structures may interact with model-internal biases, reinforcing traditional gender norms. This workshop examines how protagonist gender and narrative structure shape the outcomes of LLM-based storytelling systems. Through hands-on experiments and guided analysis, participants will explore gender representation in LLM-generated stories, perform counterfactual modifications, and evaluate how narrative interpretations shift when character gender is altered. The workshop aims to foster interdisciplinary collaborations, inspire novel methodologies, and advance research on fair and inclusive AI-driven storytelling in games and interactive media.
Post-training quantization reduces the computational demands of Large Language Models (LLMs) but can weaken some of their capabilities. Since LLM abilities emerge with scale, smaller LLMs are more sensitive to quantization. In this paper, we explore how quantization affects smaller LLMs’ ability to perform retrieval-augmented generation (RAG), specifically in longer contexts. We chose personalization for evaluation because it is a challenging domain for RAG, requiring long-context reasoning over multiple documents. We compare the original FP16 and the quantized INT4 performance of multiple 7B and 8B LLMs on two tasks, progressively increasing the number of retrieved documents to test how quantized models cope with longer contexts. To better understand the effect of retrieval, we evaluate three retrieval models in our experiments. Our findings reveal that if a 7B LLM performs the task well, quantization does not impair its performance or its long-context reasoning capabilities. We conclude that it is possible to utilize RAG with quantized smaller LLMs.
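A minimal sketch of the kind of FP16 vs. INT4 comparison described above is given below. It is not the paper's actual setup; the model name, documents, and prompt format are illustrative assumptions. The sketch simply loads the same 7B checkpoint once in half precision and once with 4-bit bitsandbytes quantization via Hugging Face Transformers, then prompts both with a growing number of retrieved documents.

```python
# Minimal sketch (not the paper's setup): comparing FP16 and INT4 versions
# of the same 7B model on a RAG-style prompt with a growing number of
# retrieved documents. Model name, documents, and question are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "mistralai/Mistral-7B-Instruct-v0.2"  # any 7B/8B instruct model
tokenizer = AutoTokenizer.from_pretrained(model_name)

fp16_model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16, device_map="auto"
)
int4_model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.float16,
    ),
)

# Placeholder "retrieved" documents standing in for a personalization corpus.
retrieved_docs = [f"[Doc {i}] ... user history snippet {i} ..." for i in range(1, 11)]
question = "Which news topics is this user most interested in?"

def answer(model, k):
    """Build a prompt from the top-k retrieved documents and generate a reply."""
    context = "\n".join(retrieved_docs[:k])
    prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=64)
    return tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)

for k in (1, 5, 10):  # progressively longer contexts
    print(f"k={k} FP16:", answer(fp16_model, k))
    print(f"k={k} INT4:", answer(int4_model, k))
```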
This article investigates gender bias in narratives generated by Large Language Models (LLMs) through a two-phase study. Building on our existing work in narrative generation, we employ a structured methodology to analyze the influence of protagonist gender on both the generation and classification of fictional stories. In Phase 1, factual narratives were generated using six LLMs, guided by predefined narrative structures (Hero's Journey and Heroine's Journey). Gender bias was quantified through specialized metrics and statistical analyses, revealing significant disparities in protagonist gender distribution and associations with narrative archetypes. In Phase 2, counterfactual narratives were constructed by altering the protagonists’ genders while preserving all other narrative elements. These narratives were then classified by the same LLMs to assess how gender influences their interpretation of narrative structures. Results indicate that LLMs exhibit difficulty in disentangling the protagonist's gender from the narrative structure, often using gender as a heuristic to classify stories. Male protagonists in emotionally driven narratives were frequently misclassified as following the Heroine's Journey, while female protagonists in logic-driven conflicts were misclassified as adhering to the Hero's Journey. These findings provide empirical evidence of embedded gender biases in LLM-generated narratives, highlighting the need for bias mitigation strategies in AI-driven storytelling to promote diversity and inclusivity in computational narrative generation.
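To make the Phase 2 procedure concrete, the sketch below builds a counterfactual narrative by swapping the protagonist's pronouns and then asks an LLM to classify the narrative structure. It is not the study's code; the pronoun map, example story, prompt, and model name are simplified assumptions.

```python
# Minimal sketch (not the study's code): constructing a counterfactual
# narrative via a naive pronoun swap, then asking an LLM to classify it.
import re
from openai import OpenAI

# Naive mapping; a real pipeline would also handle names, titles, and grammar
# (e.g. possessive "her" vs. object "her").
SWAP = {"he": "she", "him": "her", "his": "her", "she": "he", "her": "him"}

def swap_gender(text: str) -> str:
    """Swap gendered pronouns in a single pass, preserving capitalization."""
    def repl(match):
        word = match.group(0)
        swapped = SWAP[word.lower()]
        return swapped.capitalize() if word[0].isupper() else swapped
    return re.sub(r"\b(" + "|".join(SWAP) + r")\b", repl, text, flags=re.IGNORECASE)

story = "He left his village, faced the dragon, and returned with the elixir."
counterfactual = swap_gender(story)

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
prompt = (
    "Classify the following story as Hero's Journey or Heroine's Journey. "
    "Answer with one label only.\n\n" + counterfactual
)
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(counterfactual)
print(response.choices[0].message.content)
```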
We examine the ideological differences in the debate surrounding large language models (LLMs) and AI regulation, focusing on the contrasting positions of the Future of Life Institute (FLI) and the Distributed AI Research (DAIR) institute. The study employs a humanistic HCI methodology, applying narrative theory to HCI-related topics and analyzing the political differences between FLI and DAIR, as they are brought to bear on research on LLMs. Two conceptual lenses, “existential risk” and “ongoing harm,” are applied to reveal differing perspectives on AI's societal and cultural significance. Adopting a longtermist perspective, FLI prioritizes preventing existential risks, whereas DAIR emphasizes addressing ongoing harm and human rights violations. The analysis further discusses these organizations’ stances on risk priorities, AI regulation, and attribution of responsibility, ultimately revealing the diverse ideological underpinnings of the AI and LLMs debate. Our analysis highlights the need for more studies of longtermism's impact on vulnerable populations, and we urge HCI researchers to consider the subtle yet significant differences in the discourse on LLMs.
Background: Collaboration between Speech and Language Therapists (SLTs) and parents is considered best practice for children with developmental disorders. However, such a collaborative approach is not yet implemented in therapy for children with developmental language disorders (DLD) in the Netherlands. Improving Dutch SLTs’ collaboration with parents requires insight into the factors that influence the way SLTs work with parents. Aims: To explore the specific beliefs of Dutch SLTs that influence how they collaborate with parents of children with DLD. Methods and procedures: We conducted three online focus groups with 17 SLTs, using a reflection tool and fictional examples of parents to prompt their thoughts, feelings, and actions in specific scenarios. Data were organised using the Theoretical Domains Framework (TDF). Outcomes and results: We identified 34 specific beliefs, across nine TDF domains, about how SLTs collaborate with parents of children with DLD. The results indicate that SLTs hold specific beliefs that support them in collaborating with parents, but also conflicting specific beliefs regarding collaborative work with parents. The latter relate to SLTs’ perspectives on their professional role and identity, their approach towards parents, and their confidence and competence in working collaboratively with parents.
To study the ways in which compounds can induce adverse effects, toxicologists have been constructing Adverse Outcome Pathways (AOPs). An AOP can be considered a pragmatic tool to capture and visualize the mechanisms underlying different types of toxicity inflicted by any kind of stressor; it describes the interactions between key entities that lead to the adverse outcome across multiple biological levels of organization. The construction or optimization of an AOP is a labor-intensive process, which currently depends on the manual search, collection, review, and synthesis of the available scientific literature. This process could, however, be largely facilitated by using Natural Language Processing (NLP) to extract the information contained in scientific literature in a systematic, objective, and rapid manner, leading to greater accuracy and reproducibility. This would allow researchers to invest their expertise in the substantive assessment of the AOPs, replacing time spent on evidence gathering with a critical review of the data extracted by NLP. As case examples, we selected two adversities frequently observed in the liver: cholestasis and steatosis, denoting the accumulation of bile and lipids, respectively. We used deep learning language models to recognize entities of interest in text and to establish causal relationships between them. We demonstrate how an NLP pipeline combining Named Entity Recognition and a simple rule-based relationship extraction model helps not only to screen the literature for compounds related to liver adversities, but also to extract mechanistic information on how such adversities develop, from the molecular to the organismal level. Finally, we provide some perspectives opened by the recent progress in Large Language Models and how these could be used in the future. We propose that this work makes two main contributions: 1) a proof-of-concept that NLP can support the extraction of information from text for modern toxicology, and 2) a template open-source model for the recognition of toxicological entities and the extraction of their relationships. All resources are openly accessible via GitHub (https://github.com/ontox-project/en-tox).
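For illustration, a heavily simplified version of such a pipeline might look like the sketch below. It is not the en-tox code from the linked repository; the NER checkpoint, entity label names, and trigger words are assumptions. It combines a generic biomedical NER model with a naive co-occurrence rule linking compounds to the two liver adversities.

```python
# Minimal sketch (not the en-tox pipeline): generic biomedical NER plus a
# naive rule-based relation extraction linking chemicals to liver adversities.
# The checkpoint, entity label names, and trigger words are assumptions.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="d4data/biomedical-ner-all",  # any biomedical NER checkpoint
    aggregation_strategy="simple",
)

CHEMICAL_LABELS = {"Chemical", "Medication"}  # label names depend on the checkpoint
TRIGGERS = ("induce", "induces", "cause", "causes", "lead to", "leads to")
ADVERSITIES = ("cholestasis", "steatosis")

def extract_relations(sentence: str):
    """Return (compound, adversity) pairs when a causal trigger co-occurs."""
    entities = ner(sentence)
    chemicals = [e["word"] for e in entities if e["entity_group"] in CHEMICAL_LABELS]
    found_adversities = [a for a in ADVERSITIES if a in sentence.lower()]
    has_trigger = any(t in sentence.lower() for t in TRIGGERS)
    if has_trigger:
        return [(c, a) for c in chemicals for a in found_adversities]
    return []

text = "Chlorpromazine induces cholestasis by inhibiting bile salt export."
print(extract_relations(text))
```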