Why are risk decisions sometimes irrational and biased rather than rational and effective? Can we educate and train vocational students and professionals in safety and security management so that they make smarter risk decisions? This paper starts with a theoretical and practical analysis. From the research literature and theory we develop a two-phase process model of biased risk decision making, focusing on two critical professional competences: risk intelligence and risk skill. Risk intelligence applies to risk analysis on a mainly cognitive level, whereas risk skill covers the application of risk intelligence in the ultimate phase of risk decision making: whether or not a professional risk manager decides to intervene, how, and how well. For both phases, risk analysis and risk decision making, the main problems are described and illustrated with examples from safety and security practice. It appears to be all about systematically biased reckoning and reasoning.
Despite the widespread application of recommender systems (RecSys) in our daily lives, rather limited research has been done on quantifying the unfairness and biases present in such systems. Prior work largely focuses on determining whether a RecSys is discriminating or not, but does not compute the amount of bias present in these systems. Biased recommendations may lead to decisions that can potentially have adverse effects on individuals, sensitive user groups, and society. Hence, it is important to quantify these biases for fair and safe commercial applications of these systems. This paper focuses on quantifying popularity bias that stems directly from the output of RecSys models, leading to over-recommendation of popular items that are likely to be misaligned with user preferences. Four metrics are proposed to quantify popularity bias in RecSys over time, in a dynamic setting, across different sensitive user groups. These metrics are demonstrated for four collaborative filtering based RecSys algorithms trained on two benchmark datasets commonly used in the literature. The results show that, when used conjointly, the proposed metrics provide a comprehensive understanding of growing disparities in treatment between sensitive groups over time.
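The four metrics are not spelled out in the abstract; as a minimal, assumption-laden sketch, one common building block for this kind of analysis is the average popularity of recommended items, compared between sensitive groups at each time step. All names below are illustrative, not the metrics proposed in the paper.

    import numpy as np

    def avg_rec_popularity(rec_lists, item_pop):
        """Mean popularity (e.g. interaction count) of the items recommended to a group of users."""
        return np.mean([item_pop[i] for recs in rec_lists for i in recs])

    def popularity_disparity_over_time(recs_by_time_and_group, item_pop):
        """Per time step, the gap in average recommended-item popularity between two sensitive groups."""
        disparity = {}
        for t, recs_by_group in recs_by_time_and_group.items():
            means = {g: avg_rec_popularity(rec_lists, item_pop)
                     for g, rec_lists in recs_by_group.items()}
            g1, g2 = sorted(means)  # assumes exactly two sensitive groups
            disparity[t] = abs(means[g1] - means[g2])
        return disparity

    # Toy usage: item interaction counts and per-group top-k lists at two time steps.
    item_pop = {"i1": 900, "i2": 40, "i3": 5}
    recs = {0: {"groupA": [["i1", "i2"]], "groupB": [["i2", "i3"]]},
            1: {"groupA": [["i1", "i1"]], "groupB": [["i3", "i2"]]}}
    print(popularity_disparity_over_time(recs, item_pop))

Tracking such a disparity value per time step is one way to make growing disparities between groups visible over time.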
Prompt design can be understood similarly to query design: a prompt aiming to understand cultural dimensions in visual research forces the AI to make sense of ambiguity, as a way to understand its training dataset and biases (Niederer, S. and Colombo, G., 'Visual Methods for Digital Research'). It moves away from prompt engineering and from efforts to make "code-like" prompts that suppress ambiguity and prevent the AI from bringing biases to the surface. Our idea is to keep the ambiguity present in the image descriptions, as in natural language, and let it flow through the different stages (degrees) of the broken telephone dynamics. This way we have less control over the result, or over the selection of the ideal result, and more questions about the dynamics implicit in the biases present in the results obtained.

Unlike textual or mathematical results, for which prompt chains or asking the AI to explain how it reached the result might be enough, images and visual methods assisted by AI demand new methods. Exploring and developing such an approach is the main goal of this research project, which is particularly interested in possible biases and unexplored patterns in AI's image affordances. How could we detect small biases in the way AI describes images and creates images based on descriptions? What exactly do the words written by AI when describing an image stand for? When it detects a 'human' or 'science', for example, what elements or archetypes remain invisible between the prompt and the image created or described?

Turning an AI's image description into a new image could give us a glimpse behind the scenes. In the broken telephone game, small misperceptions between telling and hearing, coding and decoding, produce big divergences in the final result, and the cultural factors in between have been widely studied. To amplify and understand possible biases, we can check how this new image would in turn be described by the AI, starting a broken telephone cycle. This process could shed light not only on the gap between an AI's image description and its capacity to reconstruct images using that description as part of a prompt, but also on biases and patterns in AI image description and in image creation based on description. It is in line with previous projects on image clustering and image prompt analysis (see reference links), and with questions such as the identification of AI image biases, cross-model analysis, reverse engineering through prompts, image clustering, and the analysis of large datasets of images from online image- and video-based platforms. The experiment becomes even more relevant in light of recent results (Shumailov et al., 2024) showing that AI models trained on AI-generated data will eventually collapse.

To frame this analysis, the proposal by Munn, Magee and Arora (2023), titled Unmaking AI Imagemaking, introduces three methodological approaches for investigating AI image models: unmaking the ecosystem, unmaking the data and unmaking the outputs. First, the idea of an ecosystem is used by these authors to describe the socio-technical implications that surround AI models: the place where they have been developed; the owners, partners, or supporters; and their interests, goals, and impositions. "Research has already identified how these image models internalize toxic stereotypes (Birhane 2021) and reproduce forms of gendered and ethnic bias (Luccioni 2023), to name just two issues" (Munn et al., 2023, p. 2).

There are also differences between the models that currently dominate the market. Although Stable Diffusion seems to be the most open because of its origin, when working with images in this model, biases appear even more quickly than in other models. "In this framing, Stable Diffusion becomes an internet-based tool, which can be used and abused by "the people," rather than a corporate product, where responsibility is clear, quality must be ensured, and toxicity must be mitigated" (Munn et al., 2023, p. 5).

To unmake the data, it is important to ask about the source of the data used and the interests behind its extraction. According to the description of the project "Creating an Ad Library Political Observatory": "This project aims to explore diverse approaches to analyze and visualize the data from Meta's ad library, which includes Instagram, Facebook, and other Meta products, using LLMs. The ultimate goal is to enhance the Ad Library Political Observatory, a tool we are developing to monitor Meta's ad business." That is to say, the images were taken from political advertising on the social network Facebook, as part of an observation process that seeks to make the investments in advertising around politics evident. These are prepared images in terms of what is seen in the background, the position and posture of the people depicted, and the visible objects. In general, we could say that we are dealing with staged images. This is important since the initial information that the AI describes is itself a representation, a visual creation.
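As a minimal sketch of the cycle described above (with hypothetical placeholders, not real library calls), the loop below alternates between an AI description step and an image generation step, storing every intermediate caption and image so the drift between rounds can be analysed later. describe_image and generate_image stand in for whichever vision-language and text-to-image models the project adopts.

    def broken_telephone(initial_image, describe_image, generate_image, rounds=5):
        """Run describe -> generate cycles and keep every intermediate step.

        describe_image(image) -> str and generate_image(text) -> image are
        hypothetical wrappers around the chosen multimodal models.
        """
        trace = [{"round": 0, "description": None, "image": initial_image}]
        image = initial_image
        for r in range(1, rounds + 1):
            description = describe_image(image)   # AI-written caption of the current image
            image = generate_image(description)   # new image generated only from that caption
            trace.append({"round": r, "description": description, "image": image})
        return trace

Comparing the stored descriptions across rounds, for example which words about people, professions or places appear, disappear or get amplified, is then what would surface the biases and patterns the project is after.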
Receiving one's first "Rijbewijs" (driving licence) is always an exciting moment for any teenager, but it also comes with considerable risks. In the Netherlands, the fatality rate of young novice drivers is five times higher than that of drivers between the ages of 30 and 59. These risks stem mainly from age-related factors and a lack of experience, which manifest themselves in inadequate higher-order skills required for hazard perception and for successful interventions in response to risks on the road. Although risk assessment and driving attitude are included in driver training and examination, accident statistics show that this has only limited influence on factors such as attitudes, motivations, lifestyles, self-assessment and risk acceptance, which play a significant role in post-licensing driving. This negatively impacts traffic safety. "How could novice drivers receive critical feedback on their driving behaviour and traffic safety?" is therefore an important question. Thanks to major advancements in domains such as ICT, sensors, big data, and Artificial Intelligence (AI), in-vehicle data is already extensively used for monitoring driver behaviour, driving style identification and driver modelling. However, the use of such techniques in pre-license driver training and assessment has not been extensively explored. EIDETIC aims to develop a novel approach that fuses multiple data sources, such as in-vehicle sensors and data (to trace the vehicle trajectory), eye-tracking glasses (to monitor viewing behaviour) and cameras (to monitor the surroundings), to provide quantifiable and understandable feedback to novice drivers. Furthermore, this new knowledge could also support driving instructors and examiners in ensuring safe drivers. The project will also generate knowledge that can serve as a foundation for the transition to training and assessment of drivers of automated vehicles.
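The abstract does not describe the fusion pipeline itself; purely as an illustrative sketch (not the EIDETIC implementation), the three streams it names could be aligned on a shared timeline with nearest-timestamp matching, so that every vehicle sample carries the closest gaze sample and scene annotation. Column names and the matching tolerance are assumptions.

    import pandas as pd

    def fuse_streams(vehicle, gaze, scene, tolerance="100ms"):
        """Attach the nearest gaze sample and scene annotation to each vehicle record."""
        vehicle = vehicle.sort_values("timestamp")
        gaze = gaze.sort_values("timestamp")
        scene = scene.sort_values("timestamp")
        fused = pd.merge_asof(vehicle, gaze, on="timestamp",
                              direction="nearest", tolerance=pd.Timedelta(tolerance))
        fused = pd.merge_asof(fused, scene, on="timestamp",
                              direction="nearest", tolerance=pd.Timedelta(tolerance))
        return fused

    # Tiny synthetic example; real data would come from in-vehicle logging, eye-tracking glasses and cameras.
    t0 = pd.Timestamp("2024-01-01 12:00:00")
    vehicle = pd.DataFrame({"timestamp": [t0, t0 + pd.Timedelta("1s")], "speed_kmh": [48.0, 52.0]})
    gaze = pd.DataFrame({"timestamp": [t0 + pd.Timedelta("20ms")], "gaze_target": ["mirror"]})
    scene = pd.DataFrame({"timestamp": [t0 + pd.Timedelta("950ms")], "hazard": ["cyclist"]})
    print(fuse_streams(vehicle, gaze, scene))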
Many SME entrepreneurs rely on a financial advisor for important financial decisions. At present, however, there is little insight into the underlying psychosocial factors that influence the advice given by financial advisors. Previous research (Kahneman, 2013) shows that, when it comes to financial decisions, people are guided by a range of psychological pitfalls (heuristics and biases). The advisor's role is assumed to be to provide specialist and objective advice, yet the pitfalls and preconceptions of the client also 'echo through' in the advice of their financial advisors. Irrational forms of risk perception and clients' unrealistic expectations about the future are thus carried over into advisors' recommendations; "the customer is king," after all. Earlier research suggests that financial advisors merely confirm their clients' expectations and ideas and do not, when necessary, correct unrealistic or 'wrong' expectations or assumptions. The advisor's motive for this is that going along with the client's ideas means less responsibility for the advisor in the event of negative outcomes or results: "this is, after all, what the client wanted." Extensive input and advice, by contrast, achieves the opposite, "the client relies blindly on the advice of the advisor," which endangers the client-advisor relationship when results are negative. The consequence is suboptimal and, in some cases, poor financial advice. This study aims to uncover the conditions for more open, better balanced and more objective financial advice for SME entrepreneurs. The central question is: how does a financial advisor arrive at financial advice for SME businesses? This question will be answered through a survey of the two largest franchise financial advisory firms (approx. 2,500 members), which constitutes a representative sample of financial advisors serving SMEs.