Despite the widespread application of recommender systems (RecSys) in our daily lives, rather limited research has been done on quantifying the unfairness and biases present in such systems. Prior work largely focuses on determining whether a RecSys is discriminating or not, but does not compute the amount of bias present in these systems. Biased recommendations may lead to decisions that can potentially have adverse effects on individuals, sensitive user groups, and society. Hence, it is important to quantify these biases for fair and safe commercial applications of these systems. This paper focuses on quantifying popularity bias that stems directly from the output of RecSys models, leading to the over-recommendation of popular items that are likely to be misaligned with user preferences. Four metrics are proposed to quantify popularity bias in RecSys over time, in a dynamic setting, across different sensitive user groups. These metrics are demonstrated for four collaborative filtering based RecSys algorithms trained on two benchmark datasets commonly used in the literature. The results show that, when used conjointly, the proposed metrics provide a comprehensive understanding of growing disparities in treatment between sensitive groups over time.
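The abstract does not spell out the four metrics themselves, so the sketch below is purely illustrative: it computes one common popularity-bias measure, average recommendation popularity (ARP), per sensitive user group, and is not taken from the paper. All names (`avg_rec_popularity`, `group_arp`, the dict-based data layout) are hypothetical assumptions.

```python
import numpy as np

def avg_rec_popularity(rec_list, item_popularity):
    # Mean training-set popularity of the items recommended to one user.
    return float(np.mean([item_popularity[i] for i in rec_list]))

def group_arp(rec_lists_by_group, item_popularity):
    # ARP averaged over the users in each sensitive group; the gap between
    # groups, tracked across time steps, is one way to read growing
    # disparities in exposure to popular items.
    return {
        group: float(np.mean([avg_rec_popularity(recs, item_popularity)
                              for recs in rec_lists]))
        for group, rec_lists in rec_lists_by_group.items()
    }

# Toy usage: interaction counts per item and top-k lists for two groups.
pop = {"a": 900, "b": 500, "c": 40, "d": 5}
recs = {"group_1": [["a", "b"], ["a", "c"]],
        "group_2": [["c", "d"], ["b", "d"]]}
print(group_arp(recs, pop))  # {'group_1': 585.0, 'group_2': 137.5}
```

In a dynamic setting this computation would be repeated at each retraining step, so that the between-group gap can be plotted over time.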
The user experience of our daily interactions is increasingly shaped with the aid of AI, mostly as the output of recommendation engines. However, it is less common to present users with possibilities to navigate or adapt such output. In this paper we argue that adding such algorithmic controls can be a potent strategy to create explainable AI and to aid users in building adequate mental models of the system. We describe our efforts to create a pattern library for algorithmic controls: the algorithmic affordances pattern library. The library can help bridge research efforts to explore and evaluate algorithmic controls and emerging practices in commercial applications, thereby scaffolding a more evidence-based adoption of algorithmic controls in industry. A first version of the library suggested four distinct categories of algorithmic controls: feeding the algorithm, tuning algorithmic parameters, activating recommendation contexts, and navigating the recommendation space. In this paper we discuss these categories and reflect on how each of them could aid explainability. Based on this reflection, we sketch a future research agenda. The paper also serves as an open invitation to the XAI community to strengthen our approach with what we have missed so far.
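The abstract names the four categories but not a data model for the library; as a minimal sketch only, they could be encoded as an enumeration in a hypothetical pattern-library entry. Every name here (`AlgorithmicControl`, `AffordancePattern`, the example entry) is an illustrative assumption, not the library's actual schema.

```python
from dataclasses import dataclass
from enum import Enum

class AlgorithmicControl(Enum):
    # The four categories suggested by the first version of the library.
    FEEDING_THE_ALGORITHM = "feeding the algorithm"
    TUNING_ALGORITHMIC_PARAMETERS = "tuning algorithmic parameters"
    ACTIVATING_RECOMMENDATION_CONTEXTS = "activating recommendation contexts"
    NAVIGATING_THE_RECOMMENDATION_SPACE = "navigating the recommendation space"

@dataclass
class AffordancePattern:
    # Hypothetical library entry pairing a GUI example with its category.
    name: str
    category: AlgorithmicControl
    example_gui: str

pattern = AffordancePattern(
    name="Thumbs up/down on a recommendation",
    category=AlgorithmicControl.FEEDING_THE_ALGORITHM,
    example_gui="rating buttons shown beneath each recommended item",
)
```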
This exploratory study investigates the rationale behind categorizing algorithmic controls, or algorithmic affordances, in the graphical user interfaces (GUIs) of recommender systems. Seven professionals from industry and academia took part in an open card sorting activity to analyze 45 cards with examples of algorithmic affordances in recommender systems’ GUIs. Their objective was to identify potential design patterns, including the features on which to base these patterns. Analyzing the group discussions revealed distinct thought processes and defining factors for design patterns that were shared by academic and industry partners. While the discussions were promising, they also demonstrated a varying degree of alignment between industry and academia when it came to labelling the identified categories. Since this workshop is part of the preparation for creating a design pattern library of algorithmic affordances, and since the library aims to be useful for both industry and research partners, further research into design patterns of algorithmic affordances, particularly in terms of labelling and description, is required in order to establish categories that resonate with all relevant parties.