This book is a journey through coexisting, emerging, and speculative types of digital value transfer infrastructures. Using digital value transfer infrastructures as a central case study, this thesis unpacks the negotiation processes that shape the governance, design, and political purposes of digital infrastructures closely linked to the public interest and state sovereignty. In particular, the papers assembled in this manuscript identify and inspect three main socio-technical developments occurring in the domain of value transfer technologies: a) the privatization and platformization of digital payment infrastructures; b) the spread of blockchain-based digital value transfer infrastructures; c) the construction of digital value transfer infrastructures as public utilities, on the part of public institutions or organizations. Concerned with the relationship between law, discourse, and technological development, the thesis explores four transversal issues that highlight the differences and peculiarities of these three scenarios: i) privacy; ii) the synergy and mutual influence of legal change and technological development in the construction of digital infrastructures; iii) the role of socio-technical imaginaries in policy-making concerned with digital infrastructures; iv) the geography and scale of digital infrastructures. The analyses lead to the argument that, in the co-development of legal systems and digital infrastructures that are core to public life, conflicts are productive. Negotiations, ruptures, and exceptions are constitutive of the unending process of mutual reinforcement, and mutual containment, in which a plurality of agencies – expressed through legal institutions, symbolic systems, as well as information and media structures – are entangled.
Visual research has historically been productive in foregrounding marginalised voices through photovoice, an alternative to the written and oral forms of participation that dominate public participation. Photovoice projects have, however, been slow to leverage digital and spatial technologies for reworking the method in ways that enable geospatial analysis and collect structured metadata that can be used in workshops to bring different groups together around unpacking urban problems. The Urban Belonging project contributes to this by testing a new application, UB App, in an empirical study of how participants from seven marginalised communities in Copenhagen experience the city: ethnic minorities, deaf, homeless, physically disabled, and mentally vulnerable people, LGBTQ+ people, and expats in Denmark. From a dataset of 1459 geolocated photos, co-interpreted by participants, the project first unpacks community-specific patterns in how the city creates experiences of belonging for different groups. Second, it examines how participants experience places differently, producing multilayered representations of conflicting viewpoints on belonging. The project hereby brings GIS and digital methods capabilities into photovoice and opens new epistemological flexibilities in the method, making it possible to move between qualitative and quantitative analysis, bottom-up and top-down lenses on data, and demographic and post-demographic ways of organising participation.
We examine the ideological differences in the debate surrounding large language models (LLMs) and AI regulation, focusing on the contrasting positions of the Future of Life Institute (FLI) and the Distributed AI Research (DAIR) institute. The study employs a humanistic HCI methodology, applying narrative theory to HCI-related topics and analyzing the political differences between FLI and DAIR as they are brought to bear on research on LLMs. Two conceptual lenses, “existential risk” and “ongoing harm,” are applied to reveal differing perspectives on AI's societal and cultural significance. Adopting a longtermist perspective, FLI prioritizes preventing existential risks, whereas DAIR emphasizes addressing ongoing harm and human rights violations. The analysis further discusses these organizations’ stances on risk priorities, AI regulation, and the attribution of responsibility, ultimately revealing the diverse ideological underpinnings of the debate on AI and LLMs. Our analysis highlights the need for more studies of longtermism's impact on vulnerable populations, and we urge HCI researchers to consider the subtle yet significant differences in the discourse on LLMs.