In this paper, we explore the design of web-based advice robots to enhance users' confidence in acting upon the provided advice. Drawing from research on algorithm acceptance and explainable AI, we hypothesise four design principles that may encourage interactivity and exploration, thus fostering users' confidence to act. Through a value-oriented prototype experiment and value-oriented semi-structured interviews, we tested these principles, confirming three of them and identifying an additional principle. The four resulting principles are: (1) put context questions and the resulting advice on one page and allow live, iterative exploration, (2) use action- or change-oriented questions to adjust the input parameters, (3) actively offer alternative scenarios based on counterfactuals, and (4) show all options instead of only the recommended one(s). These principles appear to contribute to the values of agency and trust. Our study integrates the Design Science Research approach with a Value Sensitive Design approach.
MULTIFILE
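The third principle above can be made concrete with a small sketch: given a rule-based advice function over a user's input parameters, single-parameter changes that would alter the advice can be surfaced as "what if" scenarios next to the recommendation. The advice rule, parameter names, and option values below are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch, not from the paper: a rule-based advice function plus a helper
# that enumerates single-parameter changes flipping the advice, so the interface
# can offer them as alternative "what if" scenarios (principle 3).
def advice(params: dict) -> str:
    # Illustrative rule: which subsidy to recommend.
    if params["household_size"] >= 3 and params["income"] < 30000:
        return "apply for subsidy A"
    return "apply for subsidy B"

def counterfactual_scenarios(params: dict, options: dict) -> list[str]:
    """List single-change alternatives that would lead to different advice."""
    current = advice(params)
    scenarios = []
    for key, values in options.items():
        for value in values:
            if value == params[key]:
                continue
            alternative = {**params, key: value}
            if advice(alternative) != current:
                scenarios.append(f"If {key} were {value}, the advice would be: {advice(alternative)}")
    return scenarios

user = {"household_size": 2, "income": 25000}
print(advice(user))  # -> apply for subsidy B
for scenario in counterfactual_scenarios(user, {"household_size": [1, 3, 4], "income": [20000, 35000]}):
    print(scenario)
```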
This study provides ERP and oscillatory dynamics data associated with the comprehension of narratives involving counterfactual events. Participants were given short stories describing an initial situation ("Marta wanted to plant flowers in her garden...."), followed by a critical sentence describing a new situation in either a factual ("Since she found a spade, she started to dig a hole") or counterfactual format ("If she had found a spade, she would have started to dig a hole"), and then a continuation sentence that was either related to the initial situation ("she bought a spade") or to the new one ("she planted roses"). The ERPs recorded for the continuation sentences related to the initial situation showed larger negativity after factuals than after counterfactuals, suggesting that the counterfactual's presupposition (that the events did not occur) prevents updating the here-and-now of discourse. By contrast, continuation sentences related to the new situation elicited similar ERPs under both factual and counterfactual contexts, suggesting that counterfactuals also momentarily activate an alternative "as if" meaning. However, the reduction of gamma power following counterfactuals suggests that the "as if" meaning is not integrated into the discourse, nor does it contribute to semantic unification processes.
LINK
This article investigates gender bias in narratives generated by Large Language Models (LLMs) through a two-phase study. Building on our existing work in narrative generation, we employ a structured methodology to analyze the influence of protagonist gender on both the generation and classification of fictional stories. In Phase 1, factual narratives were generated using six LLMs, guided by predefined narrative structures (Hero's Journey and Heroine's Journey). Gender bias was quantified through specialized metrics and statistical analyses, revealing significant disparities in protagonist gender distribution and associations with narrative archetypes. In Phase 2, counterfactual narratives were constructed by altering the protagonists’ genders while preserving all other narrative elements. These narratives were then classified by the same LLMs to assess how gender influences their interpretation of narrative structures. Results indicate that LLMs exhibit difficulty in disentangling the protagonist's gender from the narrative structure, often using gender as a heuristic to classify stories. Male protagonists in emotionally driven narratives were frequently misclassified as following the Heroine's Journey, while female protagonists in logic-driven conflicts were misclassified as adhering to the Hero's Journey. These findings provide empirical evidence of embedded gender biases in LLM-generated narratives, highlighting the need for bias mitigation strategies in AI-driven storytelling to promote diversity and inclusivity in computational narrative generation.
MULTIFILE
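As a rough illustration of the counterfactual construction step described above, the sketch below swaps a protagonist's name and gendered pronouns while leaving the rest of the story untouched. The word-level substitution table, the helper name, and the example sentence are simplifying assumptions, not the study's actual pipeline; real narratives need context-aware rewriting (for instance, "her" can map to either "him" or "his").

```python
import re

# Naive pronoun mapping used only for this sketch; it deliberately ignores the
# ambiguity of possessive forms ("her" -> "him" loses the possessive reading).
SWAPS = {"he": "she", "him": "her", "his": "her", "she": "he", "her": "him", "hers": "his"}

def swap_protagonist_gender(story: str, name: str, new_name: str) -> str:
    """Return a counterfactual version of `story` with the protagonist's gender altered."""
    def repl(match: re.Match) -> str:
        word = match.group(0)
        swapped = SWAPS.get(word.lower(), word)
        return swapped.capitalize() if word[0].isupper() else swapped

    pattern = r"\b(" + "|".join(SWAPS) + r")\b"
    story = re.sub(pattern, repl, story, flags=re.IGNORECASE)
    return story.replace(name, new_name)

print(swap_protagonist_gender("Arthur drew his sword; he did not hesitate.", "Arthur", "Morgana"))
# -> "Morgana drew her sword; she did not hesitate."
```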
Multilevel models using logistic regression (MLogRM) and random forest models (RFM) are increasingly deployed in industry for the purpose of binary classification. The European Commission’s proposed Artificial Intelligence Act (AIA) necessitates, under certain conditions, that application of such models is fair, transparent, and ethical, which consequently implies technical assessment of these models. This paper proposes and demonstrates an audit framework for technical assessment of RFMs and MLogRMs by focussing on model-, discrimination-, and transparency & explainability-related aspects. To measure these aspects, 20 KPIs are proposed, which are paired with a traffic light risk assessment method. An open-source dataset is used to train an RFM and an MLogRM, and these KPIs are computed and compared with the traffic lights. The performance of popular explainability methods such as kernel- and tree-SHAP is assessed. The framework is expected to assist regulatory bodies in performing conformity assessments of binary classifiers and also benefits providers and users deploying such AI systems to comply with the AIA.
DOCUMENT
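A minimal sketch of the kind of technical assessment described above, under stated assumptions: train a random forest binary classifier on an open-source dataset, compute one illustrative model-related KPI, map it to a traffic light, and run tree-SHAP for the transparency and explainability aspect. The dataset, the single KPI, and the 0.75/0.60 cut-offs are assumptions for illustration; the paper's 20 KPIs are not reproduced here.

```python
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# An open-source dataset stands in for the one used in the paper.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

rfm = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# One illustrative model-related KPI (hold-out accuracy) mapped to a traffic light;
# the thresholds below are assumptions, not the paper's.
accuracy = accuracy_score(y_test, rfm.predict(X_test))
traffic_light = "green" if accuracy >= 0.75 else ("amber" if accuracy >= 0.60 else "red")
print(f"accuracy KPI = {accuracy:.3f} -> {traffic_light}")

# Transparency & explainability aspect: tree-SHAP feature attributions on the test set.
explainer = shap.TreeExplainer(rfm)
shap_values = explainer.shap_values(X_test)
print("tree-SHAP attributions computed for", len(X_test), "test cases")
```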
Narrative structures such as the Hero’s Journey and Heroine’s Journey have long influenced how characters, themes, and roles are portrayed in storytelling. When used to guide narrative generation in systems powered by Large Language Models (LLMs), these structures may interact with model-internal biases, reinforcing traditional gender norms. This workshop examines how protagonist gender and narrative structure shape storytelling outcomes in LLM-based storytelling systems. Through hands-on experiments and guided analysis, participants will explore gender representation in LLM-generated stories, perform counterfactual modifications, and evaluate how narrative interpretations shift when character gender is altered. The workshop aims to foster interdisciplinary collaborations, inspire novel methodologies, and advance research on fair and inclusive AI-driven storytelling in games and interactive media.
LINK
It is well documented that international enterprises are more productive. Only a few studies have explored the effect of internationalization on productivity and innovation at the firm level. Using propensity score matching, we analyze the causal effects of internationalization on innovation in 10 transition economies. We distinguish between three types of internationalization: exporting, FDI, and international outsourcing. We find that internationalization causes higher levels of innovation. More specifically, we show that (i) exporting results in more R&D, higher sales from product innovation, and an increase in the number of international patents; (ii) outward FDI increases R&D and international patents; and (iii) international outsourcing leads to higher sales from product innovation. The paper provides empirical support for the theoretical literature on heterogeneous firms in international trade, which argues that middle-income countries gain from trade liberalization through increases in firm productivity and innovative capabilities.
DOCUMENT
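The sketch below shows propensity score matching in its simplest form, not the paper's exact specification: estimate each firm's probability of internationalizing from observed covariates, match each treated firm to the nearest untreated firm on that score, and compare innovation outcomes. The dataframe and column names (`exporter`, `rnd_spending`, the covariates) are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def att_by_psm(df: pd.DataFrame, treatment: str, outcome: str, covariates: list) -> float:
    """Average treatment effect on the treated via 1-to-1 nearest-neighbour matching on the propensity score."""
    # Propensity score: probability of treatment (e.g. exporting) given firm covariates.
    propensity = LogisticRegression(max_iter=1000).fit(df[covariates], df[treatment])
    score = propensity.predict_proba(df[covariates])[:, 1]

    mask = df[treatment].to_numpy() == 1
    # Match every treated firm to the untreated firm with the closest propensity score.
    nn = NearestNeighbors(n_neighbors=1).fit(score[~mask].reshape(-1, 1))
    _, idx = nn.kneighbors(score[mask].reshape(-1, 1))

    matched = df.loc[~mask, outcome].to_numpy()[idx.ravel()]
    return float(np.mean(df.loc[mask, outcome].to_numpy() - matched))

# Illustrative call, assuming a firm-level dataframe `firms`:
# att = att_by_psm(firms, treatment="exporter", outcome="rnd_spending",
#                  covariates=["size", "age", "capital_intensity"])
```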
This guide was developed for designers and developers of AI systems, with the aim of ensuring that these systems are sufficiently explainable. Sufficient here means that the system meets the legal requirements of the AI Act and the GDPR and that users can use it properly. In this guide, we first explain the legal requirements that apply to the explainability of AI systems. These come from the GDPR and the AI Act. We then explain how AI is used in the financial sector and work out one problem in detail. For this problem, we show how the user interface can be adapted to make the AI explainable. These designs serve as prototypical examples that can be adapted to new problems. This guide is based on the explainability of AI systems for the financial sector; however, the advice can also be applied in other sectors.
MULTIFILE
There is a need to modernize the Dutch collective management system for music copyright to match the rapidly changing digital music industry. Focusing on the often-neglected human values aspect, this study, part of a larger PhD research project, examines the value preferences of music rights holders: creators and publishers. It aims to advise on the technological redesign of the music copyright management system and to contribute to discussions on equitable collective management. Building upon prior research, which comprehensively analyzed the Dutch music copyright system and identified key stakeholders, this paper analyses 24 interviews with those key stakeholders to identify their values and potential value tensions. Initial findings establish a set of shared values, crucial for the next phases of the study: values operationalization. This research makes an academic contribution by integrating the Value Sensitive Design (VSD) approach with Distributive Justice Theory, enriching VSD's application and enhancing our understanding of the Economics of Collective Management (ECM).
MULTIFILE
This guide was developed for designers and developers of AI systems, with the goal of ensuring that these systems are sufficiently explainable. Sufficient here means that the system meets the legal requirements of the AI Act and the GDPR and that users can use the system properly. Explainability of decisions is an important requirement in many systems and even an important principle for AI systems [HLEG19]. In many AI systems, explainability is not self-evident. AI researchers expect that the challenge of making AI explainable will only increase. On the one hand, this comes from the applications: AI will be used more and more often, for larger and more sensitive decisions. On the other hand, organizations are building ever better models, for example by using a greater variety of inputs. With more complex AI models, it is often less clear how a decision was made. Organizations that deploy AI must take into account users' need for explanations. Systems that use AI should be designed to provide the user with appropriate explanations. In this guide, we first explain the legal requirements for explainability of AI systems. These come from the GDPR and the AI Act. Next, we explain how AI is used in the financial sector and elaborate on one problem in detail. For this problem, we then show how the user interface can be modified to make the AI explainable. These designs serve as prototypical examples that can be adapted to new problems. This guide is based on the explainability of AI systems for the financial sector. However, the advice can also be used in other sectors.
DOCUMENT
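One way a user interface can present an explanation, sketched below under assumed inputs rather than the guide's actual worked example: show a counterfactual, i.e. the smallest change to a single input that would flip the decision. The toy credit model, feature names, and step size are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy credit model: approve (1) or reject (0) from yearly income and existing debt (both in k€).
X = np.array([[20, 15], [60, 5], [35, 20], [80, 2], [25, 1], [45, 30]])
y = np.array([0, 1, 0, 1, 1, 0])
model = LogisticRegression().fit(X, y)

def counterfactual_message(applicant: np.ndarray, feature: int, name: str, step: float, max_steps: int = 50) -> str:
    """Search along one input for the nearest value that flips a rejection into an approval."""
    for i in range(1, max_steps + 1):
        candidate = applicant.copy()
        candidate[feature] += i * step
        if model.predict(candidate.reshape(1, -1))[0] == 1:
            return (f"With {name} at {candidate[feature]:.0f} instead of "
                    f"{applicant[feature]:.0f}, the application would be approved.")
    return "No change to this single input would lead to approval."

applicant = np.array([30.0, 18.0])
if model.predict(applicant.reshape(1, -1))[0] == 1:
    print("Application approved.")
else:
    # The user-facing explanation a screen could show next to the rejection.
    print(counterfactual_message(applicant, feature=0, name="yearly income (k€)", step=1.0))
```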