Remaining Useful Life (RUL) estimation is directly related to the application of predictive maintenance. When RUL estimation is performed via data-driven methods and Artificial Intelligence algorithms, explainability and interpretability of the model are necessary for trusted predictions. This is especially important when predictive maintenance is applied to gas turbines or aeroengines, as they have high operational and maintenance costs, while their safety standards are strict and highly regulated. The objective of this work is to study the explainability of a Deep Neural Network (DNN) RUL prediction model. An open-source database is used, which is composed of measurements computed with a thermodynamic model for a given turbofan engine, considering non-linear degradation and data points for every second of a full flight cycle. First, the necessary data pre-processing is performed, and a DNN is used as the regression model. Its hyper-parameters are selected using random search and Bayesian optimisation. Tests concerning feature selection and the need for additional virtual sensors are discussed. The generalisability of the model is assessed, showing that the type of fault as well as the dominant degradation mode has an important effect on the overall accuracy of the model. The explainability and interpretability aspects are studied using the Local Interpretable Model-agnostic Explanations (LIME) method. The outcomes show that for simple data sets the model captures the underlying physics better and LIME gives a good explanation. However, as the complexity of the data increases, the accuracy of the model drops and LIME also appears to have difficulties in giving satisfactory explanations.
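As a minimal, illustrative sketch of the LIME step described in this abstract (not the authors' implementation or dataset): the code below trains a small fully connected RUL regressor on synthetic stand-in data and asks LIME for a local explanation of a single prediction. The sensor names, network size and training settings are placeholders, not the hyper-parameters found by the random search and Bayesian optimisation.

```python
# Minimal sketch, assuming synthetic stand-in data instead of the turbofan database.
import numpy as np
from tensorflow import keras
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
n_features = 14                                    # placeholder number of sensor channels
X_train = rng.normal(size=(2000, n_features))
y_train = 100 - 20 * X_train[:, 0] + rng.normal(scale=5.0, size=2000)  # fake RUL labels
X_test = rng.normal(size=(10, n_features))
feature_names = [f"sensor_{i}" for i in range(n_features)]  # illustrative names

# Simple fully connected regression network (illustrative architecture).
model = keras.Sequential([
    keras.layers.Input(shape=(n_features,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(1),                         # predicted RUL
])
model.compile(optimizer="adam", loss="mse")
model.fit(X_train, y_train, epochs=20, batch_size=128, verbose=0)

# LIME fits a local surrogate model around one prediction and reports which
# features pushed the estimated RUL up or down for that particular data point.
explainer = LimeTabularExplainer(
    X_train, mode="regression", feature_names=feature_names)
explanation = explainer.explain_instance(
    X_test[0], lambda x: model.predict(x, verbose=0).ravel(), num_features=5)
print(explanation.as_list())
```

For a regression model, explanation.as_list() returns (feature condition, local weight) pairs, i.e. which sensor ranges locally increased or decreased the estimated RUL for the explained instance.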
The user experience of our daily interactions is increasingly shaped by AI, mostly as the output of recommendation engines. However, it is less common to offer users ways to navigate or adapt such output. In this paper we argue that adding such algorithmic controls can be a potent strategy to create explainable AI and to help users build adequate mental models of the system. We describe our efforts to create a pattern library for algorithmic controls: the algorithmic affordances pattern library. The library can help bridge research efforts to explore and evaluate algorithmic controls and emerging practices in commercial applications, thereby scaffolding a more evidence-based adoption of algorithmic controls in industry. A first version of the library suggested four distinct categories of algorithmic controls: feeding the algorithm, tuning algorithmic parameters, activating recommendation contexts, and navigating the recommendation space. In this paper we discuss these categories and reflect on how each of them could aid explainability. Based on this reflection, we outline a sketch of a future research agenda. The paper also serves as an open invitation to the XAI community to strengthen our approach with perspectives we have missed so far.
This guide was developed for designers and developers of AI systems, with the goal of ensuring that these systems are sufficiently explainable. Sufficient here means that the system meets the legal requirements of the AI Act and the GDPR, and that users can use it properly. Explainability of decisions is an important requirement in many systems and even an important principle for AI systems [HLEG19]. In many AI systems, explainability is not self-evident. AI researchers expect that the challenge of making AI explainable will only increase. This is partly driven by the applications: AI will be used more and more often, for larger and more sensitive decisions. It is also driven by the models: organizations are building better and better models, for example by using a wider variety of inputs. With more complex AI models, it is often less clear how a decision was made. Organizations that deploy AI must take users' need for explanations into account. Systems that use AI should be designed to provide the user with appropriate explanations. In this guide, we first explain the legal requirements for explainability of AI systems, which come from the GDPR and the AI Act. Next, we explain how AI is used in the financial sector and elaborate on one problem in detail. For this problem, we then show how the user interface can be modified to make the AI explainable. These designs serve as prototypical examples that can be adapted to new problems. This guide is based on the explainability of AI systems in the financial sector; however, the advice can also be applied in other sectors.
Companies, including telecom providers, rely increasingly on complex AI systems. The lack of interpretability that such systems often introduce creates many challenges in understanding the underlying decision-making process. Trust in AI systems is important because it contributes to acceptance and adoption among users. The field of Explainable AI (XAI) plays a crucial role here by providing users with transparency and explanations of the decisions and workings of such systems.
Goal: Various stakeholders are typically involved in AI systems, each with a unique role in relation to these systems. As a result, the need for explanation varies depending on who uses the system. The primary goal of this research is to generate and evaluate stakeholder-tailored explanations for use cases in the telecom industry. By identifying best practices, developing new explainability tools and applying them in various use cases, the aim is to gain valuable insights.
Results: Results include identifying the current best practices for generating meaningful explanations and developing stakeholder-tailored explanations for telecom use cases.
Duration: 1 September 2023 - 30 August 2027
Approach: The research starts with a literature study, followed by the identification of possible use cases and a mapping of stakeholder needs. Prototypes will then be developed, and their ability to provide meaningful explanations will be evaluated.