Questions of ethics lie at the heart of government responses to Covid-19, professional reactions and citizens’ behaviour. Such questions include: Do we value health or the economy? Who gets the protective equipment, ventilators or food vouchers? Is combatting loneliness worth the risk of spreading or contracting the virus? During May 2020, a group of academics in partnership with the International Federation of Social Workers (IFSW) launched a qualitative survey, asking for details of the ethical challenges faced by social workers during Covid-19. We identified six main themes:

1. Creating and maintaining trusting, honest and empathic relationships via phone or internet with due regard to privacy and confidentiality, or in person with protective equipment.
2. Prioritising service user needs and demands, which are greater and different due to the pandemic, when resources are stretched/unavailable and full assessments often impossible.
3. Balancing service user rights, needs and risks against personal risk to social workers and others, in order to provide services as well as possible.
4. Deciding whether to follow national and organisational policies, procedures or guidance (existing or new) or to use professional discretion in circumstances where the policies seem inappropriate, confused or lacking.
5. Acknowledging and handling emotions, fatigue and the need for self-care, when working in unsafe and stressful circumstances.
6. Using the lessons learned from working during the pandemic to rethink social work in the future.
One aspect of a responsible application of Artificial Intelligence (AI) is ensuring that the operation and outputs of an AI system are understandable for non-technical users, who need to consider its recommendations in their decision making. The importance of explainable AI (XAI) is widely acknowledged; however, its practical implementation is not straightforward. In particular, it is still unclear what non-technical users require from explanations, i.e., what makes an explanation meaningful. In this paper, we synthesize insights on meaningful explanations from a literature study and two use cases in the financial sector. We identified 30 components of meaningfulness in the XAI literature. In addition, we report three themes associated with explanation needs that were central to the users in our use cases, but are not prominently described in the literature: actionability, coherent narratives and context. Our results highlight the importance of narrowing the gap between theoretical and applied responsible AI.
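As a minimal illustration of the gap the abstract describes, the sketch below (an assumption for illustration, not the paper's method or data) computes a common technical "explanation", permutation feature importance, with scikit-learn. The model and feature names are hypothetical; the point is that the raw output is a bare list of scores, which still lacks the actionability, coherent narrative and context a non-technical user would need for it to be meaningful.

```python
# Minimal sketch (hypothetical model and features, not the paper's method):
# a raw XAI-style output via permutation feature importance.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Hypothetical tabular data standing in for a financial use case.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["income", "debt_ratio", "age", "account_tenure"]

model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: how much does shuffling each feature degrade
# the model's score?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

for name, score in zip(feature_names, result.importances_mean):
    print(f"{name}: {score:.3f}")
# The output is a list of scores -- technically an explanation, but without
# actionability, a coherent narrative or context it is not yet meaningful
# to a non-technical user.
```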
Artificial Intelligence (AI) is increasingly shaping the way we work, live, and interact, leading to significant developments across various sectors of industry, including media, finance, business services, retail and education. In recent years, numerous high-level principles and guidelines for ‘responsible’ or ‘ethical’ AI have been formulated. However, these theoretical efforts often fall short when it comes to addressing the practical challenges of implementing AI in real-world contexts: Responsible Applied AI. The one-day workshop on Responsible Applied Artificial InTelligence (RAAIT) at HHAI 2024: Hybrid Human AI Systems for the Social Good in Malmö, Sweden, brought together researchers studying various dimensions of Responsible AI in practice. This was the second RAAIT workshop, following the first edition at the 2023 European Conference on Artificial Intelligence (ECAI) in Krakow, Poland.