The user experience of our daily interactions is increasingly shaped by AI, mostly as the output of recommendation engines. However, it is less common to present users with possibilities to navigate or adapt such output. In this paper we argue that adding such algorithmic controls can be a potent strategy to create explainable AI and to aid users in building adequate mental models of the system. We describe our efforts to create a pattern library for algorithmic controls: the algorithmic affordances pattern library. The library can help bridge research efforts to explore and evaluate algorithmic controls with emerging practices in commercial applications, thereby scaffolding a more evidence-based adoption of algorithmic controls in industry. A first version of the library suggested four distinct categories of algorithmic controls: feeding the algorithm, tuning algorithmic parameters, activating recommendation contexts, and navigating the recommendation space. In this paper we discuss these categories and reflect on how each of them could aid explainability. Based on this reflection, we sketch a future research agenda. The paper also serves as an open invitation to the XAI community to strengthen our approach with perspectives we have missed so far.
Recommenders play a significant role in our daily lives, making decisions for users on a regular basis. Their widespread adoption necessitates a thorough examination of how users interact with recommenders and the algorithms that drive them. An important form of interaction in these systems is the algorithmic affordance: a means that provides users with perceptible control over the algorithm by, for instance, providing context (‘find a movie for this profile’), weighing criteria (‘most important is the main actor’), or evaluating results (‘loved this movie’). The assumption is that these algorithmic affordances impact interaction qualities such as transparency, trust, autonomy, and serendipity, and, as a result, the user experience. Currently, the precise nature of the relation between algorithmic affordances, their specific implementations in the interface, interaction qualities, and user experience remains unclear. Subjects discussed during the workshop therefore include, but are not limited to: the impact of algorithmic affordances and their implementations on interaction qualities; the balance between cognitive overload and transparency in recommender interfaces containing algorithmic affordances; and the reasons why research into these types of interfaces sometimes fails to cross the research-practice gap and does not land in design practice. As a potential solution, the workshop committee proposes a library of example algorithmic affordance design patterns and their implementations in recommender interfaces, enriched with academic research concerning their impact. The final part of the workshop will be dedicated to formulating guiding principles for such a library.
Algorithmic affordances are defined as user interaction mechanisms that give users tangible control over AI algorithms, such as recommender systems. Designing such algorithmic affordances, including assessing their impact, is not straightforward, and practitioners state that they lack resources to design adequately for the interfaces of AI systems. This could be remedied by creating a comprehensive pattern library of algorithmic affordances. Such a library should provide easy access to patterns, supported by live examples and research on their experiential impact and limitations of use. The Algorithmic Affordances in Recommender Interfaces workshop aimed to address key challenges in building such a pattern library, including pattern identification features, a framework for systematic impact evaluation, and understanding the interaction between algorithmic affordances and their context of use, especially in education or with users with low algorithmic literacy. Preliminary solutions to these challenges were proposed.
The research proposal aims to improve the design and verification process for coastal protection works. With global sea levels rising, the Netherlands in particular faces the challenge of protecting its coastline from potential flooding. Four strategies for coastal protection are recognized: protection-closed (dikes, dams, dunes), protection-open (storm surge barriers), advancing the coastline (beach nourishment, land reclamation), and accommodation through "living with water" concepts.

The construction of coastal protection works involves collaboration between the client and contractors. Different roles, such as project management, project control, stakeholder management, technical management, and contract management, work together to ensure the project's success. The design and verification process is crucial in coastal protection projects. The contract may include functional requirements or detailed design specifications. Design drawings with tolerances are created before construction begins. During construction and at final verification, the as-built design is measured using survey data. The accuracy of the measurement techniques used can impact the construction process and may lead to contractual issues if not properly planned.

The problem addressed in the research proposal is the lack of a comprehensive and consistent process for defining and verifying design specifications in coastal protection projects. Existing documents focus on specific aspects of the process but do not provide a holistic approach. The research aims to improve the definition and verification of design specifications through a systematic review of contractual parameters and survey methods. It seeks to reduce potential claims, improve safety, enhance the competitiveness of maritime construction companies, and decrease the time spent on contractual discussions.
The research will deliver several outcomes: a body of knowledge describing existing and best practices, a set of best practices and recommendations for verifying specific design parameters, and supporting documents such as algorithms for verification.
This project researches risk perceptions about data, technology, and digital transformation in society, and how to build trust between organisations and users to ensure sustainable data ecologies. The aim is to understand the user's role in a tech-driven environment and their perception of the resulting relationships with organisations that offer data-driven services and products. The discourse on digital transformation is productive but does not truly address users' attitudes and awareness (Kitchin 2014). Companies are insufficiently aware of the potential accidents and resulting loss of trust that undermine data ecologies and, consequently, forfeit their beneficial potential. Facebook's Cambridge Analytica scandal, for instance, led to 42% of US adults deleting their accounts and the company losing billions. Social, political, and economic interactions are increasingly digitalised, which brings tangible benefits but also challenges privacy, individual well-being, and a fair society. User awareness of organisational practices is of heightened importance, as vulnerabilities for users equal vulnerabilities for data ecologies. Without transparency and a new "social contract" for a digital society, problems are inevitable. Recurring scandals about data leaks and biased algorithms are just two examples that illustrate the urgency of this research. Properly informing users about an organisation's data policies makes a crucial difference (Accenture 2018), and to develop sustainable business models, organisations need to understand what users expect and how to communicate with them. This research project tackles this issue head-on. First, a deeper understanding of users' risk perception is needed to formulate concrete policy recommendations aiming to educate and build trust. Second, insights about users' perceptions will inform guidelines.
Through empirical research on framing in the data discourse, user types, and trends in organisational practice, the project develops concrete advice, for users and practitioners alike, on building sustainable relationships in a resilient digital society.