Artificial Intelligence (AI) is increasingly shaping the way we work, live, and interact, leading to significant developments across various sectors of industry, including media, finance, business services, retail, and education. In recent years, numerous high-level principles and guidelines for ‘responsible’ or ‘ethical’ AI have been formulated. However, these theoretical efforts often fall short when it comes to addressing the practical challenges of implementing AI in real-world contexts: Responsible Applied AI. The one-day workshop on Responsible Applied Artificial InTelligence (RAAIT) at HHAI 2024: Hybrid Human AI Systems for the Social Good in Malmö, Sweden, brought together researchers studying various dimensions of Responsible AI in practice. This was the second RAAIT workshop, following the first edition at the 2023 European Conference on Artificial Intelligence (ECAI) in Krakow, Poland.
Concerns have been raised over the increased prominence of generative AI in art. Some fear that generative models could undermine the viability of human-created art, and they oppose developers training generative models on media without the artists' permission. Proponents of AI art point to the potential increase in accessibility. Is there an approach that addresses the concerns artists raise while still utilizing the potential these models bring? Current models often aim for autonomous music generation. This, however, makes the model a black box that users cannot interact with. By utilizing an AI pipeline combining symbolic music generation with a proposed sample creation system trained on Creative Commons data, a musical looping application has been created to provide non-expert music users with a way to start making their own music. The first results show that it assists users in creating musical loops, and it shows promise for future research into human-AI interaction in art.
The user’s experience with a recommender system is significantly shaped by the dynamics of user-algorithm interactions. These interactions are often evaluated using interaction qualities, such as controllability, trust, and autonomy, to gauge their impact. As part of our effort to systematically categorize these evaluations, we explored the suitability of the interaction qualities framework as proposed by Lenz, Diefenbach, and Hassenzahl. During this examination, we uncovered four challenges within the framework itself, and an additional external challenge. In studies examining the interaction between user control options and interaction qualities, interdependencies between concepts, inconsistent terminology, and the entity perspective (is it a user’s trust or a system’s trustworthiness?) often hinder a systematic inventory of the findings. Additionally, our discussion underscored the crucial role of the decision context in evaluating the relation between algorithmic affordances and interaction qualities. We propose dimensions of decision contexts (such as ‘reversibility of the decision’ or ‘time pressure’). These dimensions could aid in establishing a systematic three-way relationship between context attributes, attributes of user control mechanisms, and experiential goals, and as such they warrant further research. In sum, while the interaction qualities framework serves as a foundational structure for organizing research on evaluating the impact of algorithmic affordances, challenges related to interdependencies and context-specific influences remain. These challenges necessitate further investigation and subsequent refinement and expansion of the framework.
Entangled Machines is a project by Mariana Fernández Mora that interrogates the colonial and extractive legacies underpinning artificial intelligence (AI). By introducing slowness and digital kinship as critical frameworks, the project reconceptualises AI as embedded within intricate social and ecological networks, thereby contesting dominant narratives of efficiency and optimisation. Through participatory, practice-based methodologies such as the Material Playground, the project integrates feminist and non-Western epistemologies to articulate alternative models for ethical, sustainable, and equitable AI practices. Over a four-year period, Entangled Machines develops theory, engages diverse communities, and produces artistic outputs to reimagine human-AI interactions. In collaboration with partners including ARIAS Amsterdam, Archival Consciousness, and the Sandberg Institute, the research seeks to foster decolonial and interdisciplinary approaches to AI. Its culmination will be an “Anarchive” – a curated assemblage of artistic, theoretical, and archival outputs – that serves as a resource for rethinking AI’s socio-political and ecological impacts.