From diagnosis to patient scheduling, AI is being considered across a growing range of clinical applications. Yet despite increasingly powerful clinical AI, uptake into actual clinical workflows remains limited. One major challenge is developing appropriate trust with clinicians. In this paper, we examine trust in clinical AI from a wider perspective, beyond user interactions with the AI itself. We identify several points in the development, usage, and monitoring of clinical AI that can significantly affect trust. We argue that trust calibration should go beyond explainable AI and encompass the entire process of clinical AI deployment. We illustrate this argument with case studies from practitioners implementing clinical AI, showing how trust can be affected at different stages of the deployment cycle.
This paper reveals how the automation of protocols ignited a public conflict between Dutch banks and their Small and Medium-sized Enterprise (SME) clients in the years after the Global Financial Crisis. The banks’ “infirmary departments” for Financial Restructuring and Recovery (FR&R) were accused of (mal)treating SMEs. Despite public support for the SMEs, the conflict resulted in no formal regulatory or legal change. Instead, the banks adopted self-regulation to improve communication with SMEs, leading to shifts in how FR&R for SMEs is governed. In this way, the banks mitigated significant negative symptoms of automation and resolved the conflict with the SMEs while keeping FR&R and ongoing automation intact. The research uses an interdisciplinary analytical framework to understand national financial conflicts in a digitalised (business) world. It contributes to the theory of institutionalising values in discursive contests between action fields. Through discourse analysis and process tracing, the paper highlights the material and normative causes of conflicts of interest among critical actors in established public-private networks.