While the concept of Responsible Innovation is increasingly common among researchers and policy makers, it is still unclear what it means in a business context. This study aims to identify which aspects of Responsible Innovation are conceptually similar to, and which are dissimilar from, social and sustainable innovation. Our conceptual analysis is based on literature reviews of responsible, social, and sustainable innovation. The insights obtained are used to conceptualise Responsible Innovation in a business context. The main conclusion is that Responsible Innovation differs from social and sustainable innovation in that it: (1) also considers possible detrimental implications of innovation, (2) includes a mechanism for responding to uncertainties associated with innovation, and (3) achieves a democratic governance of the innovation. However, achieving the latter will not be realistic in a business context. The results of this study are relevant for researchers, managers and policy makers who are interested in Responsible Innovation in the business context.
This guide was developed for designers and developers of AI systems, with the goal of ensuring that these systems are sufficiently explainable. Sufficient here means that the explanations meet the legal requirements of the AI Act and the GDPR and that users can use the system properly. Explainability of decisions is an important requirement in many systems and a key principle for AI systems [HLEG19]. In many AI systems, explainability is not self-evident, and AI researchers expect that the challenge of making AI explainable will only increase. On the one hand, this stems from the applications: AI will be used more and more often, and for larger and more sensitive decisions. On the other hand, organizations are building ever better models, for example by using a wider variety of inputs. With more complex AI models, it is often less clear how a decision was made. Organizations that deploy AI must take users' need for explanations into account, and systems that use AI should be designed to provide the user with appropriate explanations. In this guide, we first explain the legal requirements for explainability of AI systems, which come from the GDPR and the AI Act. Next, we explain how AI is used in the financial sector and elaborate on one problem in detail. For this problem, we then show how the user interface can be modified to make the AI explainable. These designs serve as prototypical examples that can be adapted to new problems. This guidance is based on the explainability of AI systems for the financial sector, but the advice can also be used in other sectors.
Whitepaper: The use of AI is on the rise in the financial sector. Using machine learning algorithms to make decisions and predictions based on the available data can be highly valuable. AI offers benefits to both financial service providers and their customers by improving service and reducing costs. Examples of AI use cases in the financial sector are: identity verification in client onboarding, transaction data analysis, fraud detection in claims management, anti-money laundering monitoring, price differentiation in car insurance, automated analysis of legal documents, and the processing of loan applications.
In this project, the HvA research group Responsible IT, in co-creation with Digital Agency Fonk, is developing a working prototype of an innovative educational AI application that improves the language skills of children and parents. Part of this application is an AI-based text simplification feature. While the user reads, this software component analyses the reader's AVI reading level and automatically adapts the story to it. Audio and speech analysis is used to detect errors in, among other things, pronunciation, grammar, and word comprehension, and the difficulty level of the text is automatically raised or lowered accordingly. By gradually increasing the difficulty of the text, reading proficiency is improved.
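The adaptive loop described above can be summarised in a few lines of code. The following is a minimal, hypothetical sketch, assuming a numeric encoding of AVI levels and an error rate produced by the speech analysis; the thresholds and function names are assumptions for illustration, not the project's actual implementation.

# Minimal, hypothetical sketch of the adaptive text-level loop described above.
# AVI-level encoding, thresholds, and names are assumptions for illustration only.

AVI_MIN, AVI_MAX = 1, 9  # assumed numeric encoding of AVI reading levels

def next_avi_level(current_level: int, error_rate: float) -> int:
    """Raise or lower the text difficulty based on the detected error rate.

    error_rate: assumed fraction of words (0.0-1.0) with detected pronunciation,
    grammar, or comprehension errors, as produced by the audio and speech analysis.
    """
    if error_rate < 0.05:      # the reader is comfortable: step the level up gradually
        return min(current_level + 1, AVI_MAX)
    if error_rate > 0.20:      # the reader is struggling: step the level down
        return max(current_level - 1, AVI_MIN)
    return current_level       # otherwise keep the current level

# Example: a reader at AVI level 4 with 3% detected errors moves on to level 5.
print(next_avi_level(4, 0.03))  # -> 5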
Public debate about the influence of AI on our lives is flourishing. The recurring question is whether AI applications, and recommender systems in particular, are a threat or a salvation. The impact of choosing a film for tonight with the help of Netflix's recommender system is still limited. The impact of dating sites, navigation systems, and social media, all systems that use algorithms to filter information or recommend choices, is already greater. The impact of recommender systems in, for example, healthcare, recruitment and selection, fraud detection, and mortgage application assessments is enormous, both at the individual and at the societal level. It is therefore urgent that recommender systems in particular are designed according to the values of Responsible AI: safe, fair, reliable, inclusive, transparent, and accountable. Designing Responsible AI properly requires solving technical, contextual, and interaction challenges. At the technical and societal level, much progress has already been made, through research into algorithms that incorporate values such as inclusiveness into their calculations and through the development of legal frameworks, respectively. In contrast, little concrete knowledge exists about implementation at the interaction level. It is known that users who have interaction options to steer or supplement an algorithm experience more transparency and reliability. However, poorly designed interaction options, or a mismatch between interaction and context, cost time and cause mental overload, frustration, and a sense of incompetence; they obscure rather than create transparency. Interface designers (UX/UI designers) lack systematic, concrete knowledge about these interaction options, their applicability, and their ethical boundaries. This limits their ability to contribute to Responsible AI at the interaction level. They would therefore welcome a pattern library of interaction options, annotated with research on how they work and where they can be used. Such a library does not yet exist, and with this project we aim to make a substantial contribution to its development.
Denim Democracy from the Alliance for Responsible Denim (ARD) is an interactive exhibition that celebrates the journey and learning of ARD members, educates visitors about sustainable denim, and highlights how companies collaborate to achieve results. Through sight, sound and tactile sensations, the visitor experiences and fully engages with sustainable denim production. The exhibition launches in October 2018 in Amsterdam and travels to key venues and locations in the Netherlands until April 2019. As consumers, we love denim, but the denim industry, like other sub-sectors in the textile, apparel and footwear industries, faces many complex sustainability challenges and has been criticized for its polluting and hazardous production practices. The Alliance for Responsible Denim project brought leading denim brands, suppliers and stakeholders together to collectively address these issues and take initial steps towards improving the ecological sustainability impact of denim production. Sustainability challenges are considered very complex and economically unattractive for individual companies to address alone. In denim, small and medium-sized firms face specific challenges, such as lower economies of scale and lower buying power to effect change in practices. There is great benefit in combining denim companies' resources and knowledge so that collective experimentation and learning can lift the sustainability standards of the industry and lead to the development of common standards and benchmarks on a scale that matters. If meaningful, transformative industrial change is to be made, it calls for collaboration between denim industry stakeholders that goes beyond supplier-buyer relations and includes horizontal value chain collaboration between competing large and small denim brands. However, collaboration between organizations, and especially between competitors, is highly complex and prone to failure. The research behind the Alliance for Responsible Denim project asked a central research question: how do competitors effectively collaborate to create common industry standards on resource use and benchmarks for improved ecological sustainability? To answer this question, we used a mixed-method, action research approach. The Alliance for Responsible Denim project mobilized and facilitated denim brands to collectively identify ways to reduce the use of water and chemicals in denim production and then supported them in implementing these practices individually in their respective firms.