This article investigates the aesthetic advice posted by temporary employment agencies on their websites. These agencies organise a substantial part of the Dutch labour market and are known to apply exclusionary practices in their recruitment and selection strategies in order to meet employers’ preferences. This article sheds light on (1) the content of the advice; (2) how it legitimises the importance of aesthetics for finding work; and (3) in what ways the advice serves the purposes of the agencies. An in-depth content analysis illustrates how the advice has the potential to reproduce exclusions, thus helping employment agencies adhere to employers’ exclusionary requests. In this case, creating online content that generates traffic to the websites produces a circular logic in which the importance of aesthetics becomes self-reinforcing. The study illustrates that the seemingly neutral and empty advice posted on websites may reinforce exclusions in the temporary work labour market.
In this paper, we explore the design of web-based advice robots to enhance users' confidence in acting upon the provided advice. Drawing on research on algorithm acceptance and explainable AI, we hypothesise four design principles that may encourage interactivity and exploration, thus fostering users' confidence to act. Through a value-oriented prototype experiment and value-oriented semi-structured interviews, we tested these principles, confirming three of them and identifying an additional one. The four resulting principles are to (1) put context questions and the resulting advice on one page and allow live, iterative exploration; (2) use action- or change-oriented questions to adjust the input parameters; (3) actively offer alternative scenarios based on counterfactuals; and (4) show all options instead of only the recommended one(s). Together, these principles appear to contribute to the values of agency and trust. Our study integrates the Design Science Research approach with a Value Sensitive Design approach.
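To make the four principles more concrete, the sketch below shows one possible way a web-based advice robot could embody them. It is purely illustrative and is not the prototype used in the study: the Context fields, option labels, scoring weights, and function names (adviseAll, applyChange, counterfactuals) are hypothetical assumptions, written here in TypeScript.

// Illustrative sketch only; all names, inputs, and scores are hypothetical,
// not the study's prototype.
interface Context {
  income: number;          // example input parameter
  householdSize: number;   // example input parameter
}

interface AdviceOption {
  label: string;
  score: number;           // higher = better fit for the given context
}

// Principle 4: score and return ALL options, ranked, not only the top one.
function adviseAll(ctx: Context): AdviceOption[] {
  const options: AdviceOption[] = [
    { label: "Option A", score: ctx.income * 0.4 + ctx.householdSize * 2 },
    { label: "Option B", score: ctx.income * 0.2 + ctx.householdSize * 5 },
    { label: "Option C", score: 10 },
  ];
  return options.sort((a, b) => b.score - a.score);
}

// Principle 2: action- or change-oriented questions ("what if my household
// grows?") expressed as small adjustments to the current context.
function applyChange(ctx: Context, change: Partial<Context>): Context {
  return { ...ctx, ...change };
}

// Principle 3: actively offer counterfactual scenarios that would change
// the top recommendation.
function counterfactuals(
  ctx: Context,
  changes: Partial<Context>[]
): Array<{ change: Partial<Context>; top: AdviceOption }> {
  const currentTop = adviseAll(ctx)[0].label;
  return changes
    .map((change) => ({ change, top: adviseAll(applyChange(ctx, change))[0] }))
    .filter((c) => c.top.label !== currentTop);
}

// Principle 1: questions and advice share one "page"; every input change
// immediately re-runs the model so users can explore iteratively.
let ctx: Context = { income: 1500, householdSize: 2 };
console.log(adviseAll(ctx));                            // full ranked list
ctx = applyChange(ctx, { householdSize: 3 });           // action-oriented change
console.log(adviseAll(ctx));                            // updated advice, same view
console.log(counterfactuals(ctx, [{ income: 3000 }]));  // offered alternatives

Keeping the advice as a pure function of the context is what makes principle (1) possible in this sketch: any change to an input can immediately re-render the full ranked list on the same page.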
This project researches risk perceptions about data, technology, and digital transformation in society, and how to build trust between organisations and users to ensure sustainable data ecologies. The aim is to understand users' role in a tech-driven environment and their perception of the resulting relationships with organisations that offer data-driven services and products. The discourse on digital transformation is productive but does not truly address users' attitudes and awareness (Kitchin 2014). Companies are insufficiently aware of the potential incidents, and the resulting loss of trust, that undermine data ecologies and consequently forfeit their beneficial potential. Facebook's Cambridge Analytica scandal, for instance, led to 42% of US adults deleting their accounts and cost the company billions.

Social, political, and economic interactions are increasingly digitalised, which brings tangible benefits but also challenges privacy, individual well-being, and a fair society. User awareness of organisational practices is of heightened importance, as vulnerabilities for users equal vulnerabilities for data ecologies. Without transparency and a new "social contract" for a digital society, problems are inevitable. Recurring scandals about data leaks and biased algorithms are just two examples that illustrate the urgency of this research. Properly informing users about an organisation's data policies makes a crucial difference (Accenture 2018), and to develop sustainable business models, organisations need to understand what users expect and how to communicate with them. This research project tackles this issue head-on. First, a deeper understanding of users' risk perception is needed to formulate concrete policy recommendations aimed at educating users and building trust. Second, insights about users' perceptions will inform guidelines. Through empirical research on framing in the data discourse, user types, and trends in organisational practice, the project develops concrete advice, for users and practitioners alike, on building sustainable relationships in a resilient digital society.