In this paper, we explore the design of web-based advice robots to enhance users' confidence in acting upon the provided advice. Drawing from research on algorithm acceptance and explainable AI, we hypothesise four design principles that may encourage interactivity and exploration, thus fostering users' confidence to act. Through a value-oriented prototype experiment and value-oriented semi-structured interviews, we tested these principles, confirming three of them and identifying an additional principle. The four resulting principles are: (1) put the context questions and the resulting advice on one page and allow live, iterative exploration; (2) use action- or change-oriented questions to adjust the input parameters; (3) actively offer alternative scenarios based on counterfactuals; and (4) show all options instead of only the recommended one(s). Together, these principles appear to contribute to the values of agency and trust. Our study integrates the Design Science Research approach with a Value Sensitive Design approach.
This exploratory study investigates the rationale behind categorizing algorithmic controls, or algorithmic affordances, in the graphical user interfaces (GUIs) of recommender systems. Seven professionals from industry and academia took part in an open card-sorting activity to analyze 45 cards with examples of algorithmic affordances in recommender systems' GUIs. Their objective was to identify potential design patterns and the defining features on which to base these patterns. Analyzing the group discussions revealed distinct thought processes and defining factors for design patterns that were shared by academic and industry participants. While the discussions were promising, they also demonstrated a varying degree of alignment between industry and academia when it came to labelling the identified categories. Since this workshop is part of the preparation for creating a design pattern library of algorithmic affordances, and since the library aims to be useful for both industry and research partners, further research into design patterns of algorithmic affordances, particularly in terms of labelling and description, is required to establish categories that resonate with all relevant parties.
Social networks and news outlets use recommender systems to distribute information and suggest news to their users. These algorithms are an attractive solution for dealing with the massive amount of content on the web [6]. However, some organisations prioritise retention and the maximisation of access numbers, which can be incompatible with values like diversity of content and transparency. In recent years, critics have warned of the dangers of algorithmic curation. The term filter bubble, coined by the internet activist Eli Pariser [1], describes the outcome of pre-selected personalisation, where users are trapped in a bubble of similar content. Pariser warns that it is not the user but the algorithm that curates and selects interesting topics to watch or read. Still, there is disagreement about the consequences for individuals and society, and research on the existence of filter bubbles is inconclusive. Fletcher [5] claims that the term filter bubble is an oversimplification of a much more complex system involving cognitive processes and social and technological interactions, and most empirical studies indicate that algorithmic recommendations have not locked large segments of the audience into bubbles [3, 6]. We built an agent-based simulation tool to study the dynamic and complex interplay between individual choices and social and technological interactions. The model includes different recommendation algorithms and a range of cognitive filters that can simulate different social network dynamics. The cognitive filters are based on the triple-filter bubble model [2]. The tool can be used to understand under which circumstances algorithmic filtering and social network dynamics affect users' innate opinions, and which interventions on recommender systems can mitigate adverse side effects such as filter bubbles.
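The cognitive-filter mechanism can be illustrated with a minimal sketch. Assuming opinions are scalars in [-1, 1] and each user has a latitude of acceptance (a bounded-confidence threshold), an item only influences a user when it falls close enough to their current opinion. The function names and parameter values below are illustrative, not the tool's actual API.

```python
# Minimal sketch of a latitude-of-acceptance (bounded-confidence) cognitive
# filter. Opinions are scalars in [-1, 1]; all names/values are illustrative.

def accepts(opinion: float, item: float, latitude: float) -> bool:
    """An item passes the cognitive filter if it lies within the user's
    latitude of acceptance around their current opinion."""
    return abs(item - opinion) <= latitude

def update_opinion(opinion: float, item: float, latitude: float,
                   mu: float = 0.2) -> float:
    """Shift the opinion a fraction mu towards an accepted item;
    rejected items leave the opinion unchanged."""
    if accepts(opinion, item, latitude):
        return opinion + mu * (item - opinion)
    return opinion

# A user at 0.0 with latitude 0.3 integrates a nearby item (0.2)
# but ignores a distant one (0.8).
print(update_opinion(0.0, 0.2, 0.3))  # moves towards the item
print(update_opinion(0.0, 0.8, 0.3))  # unchanged
```

Exposing users to items just inside this latitude, rather than to maximally similar ones, is the kind of intervention the tool is designed to test.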
The resulting tool is an open-source interactive web interface that allows simulations with different parameters such as users' characteristics, social network structure and recommender system settings (see Fig. 1). The ABM model, implemented in Python with Mesa [4], allows users to visualise, compare and analyse the consequences of combining various factors. Experimental results are similar to those published in the Triple Filter Bubble paper [2]. The novelty is the option to use a real collaborative-filtering recommender system and a new metric to measure the distance between users' innate and final opinions. We observed that slight modifications in the recommender system, exposing items within the boundaries of users' latitude of acceptance, could increase content diversity.

References
1. Pariser, E.: The Filter Bubble: What the Internet Is Hiding from You. Penguin, New York, NY (2011)
2. Geschke, D., Lorenz, J., Holtz, P.: The triple-filter bubble: Using agent-based modelling to test a meta-theoretical framework for the emergence of filter bubbles and echo chambers. British Journal of Social Psychology 58, 129–149 (2019)
3. Möller, J., Trilling, D., Helberger, N., van Es, B.: Do not blame it on the algorithm: An empirical assessment of multiple recommender systems and their impact on content diversity. Information, Communication and Society 21(7), 959–977 (2018)
4. Mesa: Agent-based modeling in Python, https://mesa.readthedocs.io/. Last accessed 2 Sep 2022
5. Fletcher, R.: The truth behind filter bubbles: Bursting some myths. Digital News Report, Reuters Institute (2020). https://reutersinstitute.politics.ox.ac.uk/news/truth-behind-filter-bubbles-bursting-some-myths. Last accessed 2 Sep 2022
6. Haim, M., Graefe, A., Brosius, H.: Burst of the filter bubble? Effects of personalization on the diversity of Google News. Digital Journalism 6(3), 330–343 (2018)
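The opinion-distance metric mentioned above could, for instance, be computed as the mean absolute difference between each user's innate and final opinion. This is a hedged sketch of such a metric, not the tool's actual implementation.

```python
# Illustrative sketch of a distance metric between users' innate and
# final opinions (mean absolute difference); not the tool's actual code.

def opinion_distance(innate: list[float], final: list[float]) -> float:
    """Average absolute shift between innate and final opinions
    across all users in the simulation."""
    if len(innate) != len(final):
        raise ValueError("opinion vectors must have the same length")
    return sum(abs(a - b) for a, b in zip(innate, final)) / len(innate)

# Three users: one shifted by 0.2, one unchanged, one shifted by 0.4.
innate = [0.1, -0.5, 0.8]
final = [0.3, -0.5, 0.4]
print(opinion_distance(innate, final))
```

A value near zero would indicate that the simulated dynamics left users' innate opinions largely intact; larger values would indicate stronger algorithmic or social influence.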
The research proposal aims to improve the design and verification process for coastal protection works. With global sea levels rising, the Netherlands, in particular, faces the challenge of protecting its coastline from potential flooding. Four strategies for coastal protection are recognized: protection-closed (dikes, dams, dunes), protection-open (storm surge barriers), advancing the coastline (beach suppletion, reclamation), and accommodation through "living with water" concepts. The construction process of coastal protection works involves collaboration between the client and contractors. Different roles, such as project management, project control, stakeholder management, technical management, and contract management, work together to ensure the project's success. The design and verification process is crucial in coastal protection projects. The contract may include functional requirements or detailed design specifications. Design drawings with tolerances are created before construction begins. During construction and final verification, the design is measured using survey data. The accuracy of the measurement techniques used can impact the construction process and may lead to contractual issues if not properly planned. The problem addressed in the research proposal is the lack of a comprehensive and consistent process for defining and verifying design specifications in coastal protection projects. Existing documents focus on specific aspects of the process but do not provide a holistic approach. The research aims to improve the definition and verification of design specifications through a systematic review of contractual parameters and survey methods. It seeks to reduce potential claims, improve safety, enhance the competitiveness of maritime construction companies, and decrease time spent on contractual discussions. 
The research will have several outcomes: a body of knowledge describing existing and best practices, a set of best practices and recommendations for verifying specific design parameters, and supporting documents such as algorithms for verification.
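As an illustration of what such a verification algorithm might look like, the sketch below checks surveyed elevations against a design level with an asymmetric tolerance band. The parameter names and tolerance values are hypothetical, not taken from any actual contract or from the proposal itself.

```python
# Hypothetical sketch: verify surveyed elevations against a design level
# with an asymmetric tolerance band (all values are illustrative only).

def verify_levels(measured: list[float], design: float,
                  tol_below: float, tol_above: float) -> list[bool]:
    """Return, per survey point, whether it lies within the
    contractual tolerance band around the design level."""
    return [design - tol_below <= m <= design + tol_above for m in measured]

# Hypothetical design crest level of 5.00 m with tolerance -0.05 m / +0.10 m.
survey = [4.97, 5.08, 4.90, 5.12]
results = verify_levels(survey, design=5.00, tol_below=0.05, tol_above=0.10)
print(results)  # the first two points pass; the last two fall outside the band
```

In practice such a check would also have to account for the accuracy of the survey method itself, which is exactly the interaction between contractual parameters and survey methods the proposal aims to systematise.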
This project researches risk perceptions of data, technology, and digital transformation in society, and how to build trust between organisations and users to ensure sustainable data ecologies. The aim is to understand users' role in a tech-driven environment and their perception of the resulting relationships with organisations that offer data-driven services and products. The discourse on digital transformation is productive but does not truly address users' attitudes and awareness (Kitchin 2014). Companies are insufficiently aware of the potential accidents, and the resulting loss of trust, that undermine data ecologies and consequently forfeit their beneficial potential. Facebook's Cambridge Analytica scandal, for instance, led to 42% of US adults deleting their accounts and the company losing billions. Social, political, and economic interactions are increasingly digitalised, which brings hands-on benefits but also challenges privacy, individual well-being and a fair society. User awareness of organisational practices is of heightened importance, as vulnerabilities for users equal vulnerabilities for data ecologies. Without transparency and a new "social contract" for a digital society, problems are inevitable. Recurring scandals about data leaks and biased algorithms are just two examples that illustrate the urgency of this research. Properly informing users about an organisation's data policies makes a crucial difference (Accenture 2018), and to develop sustainable business models, organisations need to understand what users expect and how to communicate with them. This research project tackles this issue head-on. First, a deeper understanding of users' risk perception is needed to formulate concrete policy recommendations aimed at educating users and building trust. Second, insights into users' perceptions will inform guidelines.
Through empirical research on framing in the data discourse, user types, and trends in organisational practice, the project develops concrete advice, for users and practitioners alike, on building sustainable relationships in a resilient digital society.