
Responsible Applied Artificial Intelligence Trade-Off Dashboard


Description

Organisations are increasingly embedding Artificial Intelligence (AI) techniques and tools in their processes. Typical examples are generative AI for images, video, and text, and classification tasks commonly used in medical applications and industry. One danger of this proliferation of AI systems is a focus on model performance that neglects important aspects such as fairness and sustainability.
For example, an organisation might be tempted to use a model with better global performance, even if it works poorly for specific vulnerable groups. The same logic can be applied to high-performance models that require a significant amount of energy for training and usage. At the same time, many organisations recognise the need for responsible AI development that balances performance with fairness and sustainability.
This KIEM project proposal aims to develop a tool that can be employed by organisations that develop and implement AI systems and aim to do so more responsibly. Through visual aids and data visualisation, the tool facilitates making these trade-offs. By showing what these values mean in practice, which choices could be made, and how those choices relate to performance, we aspire to educate users on how the use of different metrics impacts the decisions made by the model and its wider consequences, such as energy consumption or fairness-related harms. The tool is meant to facilitate conversation between developers, product owners, and project leaders, helping them make their choices more explicit and responsible.


Purpose

Organisations across all sectors increasingly employ AI to enhance their daily operations, yet its effectiveness is often judged solely by performance or accuracy. This narrow focus often overlooks quality aspects such as fairness and sustainability, which can cause significant harm when ignored. As awareness of these issues grows, organisations are looking for tooling that assists them in developing AI more responsibly. Although such tooling exists, it rarely addresses the practical trade-offs that arise between quality metrics. To address this gap, we developed a fairness and sustainability dashboard that helps practitioners (re)design their models and data with fairness and sustainability in mind by visualising the impact of their choices.

Research on existing fairness dashboard tools revealed that few actually help practitioners understand the real-world impact of applying the three main fairness metrics: Equalized Odds, Predictive Parity, and Demographic Parity. Most existing tools merely calculate these metrics and return numerical values, offering little insight into how each metric affects different demographic groups. Our interactive dashboard visualises how feature selection, resource constraints, and correlations between protected attributes influence fairness outcomes across groups. It illustrates the trade-offs between fairness metrics and lets users experiment with data-centric interventions to better understand their effects, helping stakeholders such as data scientists, product managers, and compliance officers make more informed, fairer AI development choices.
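To make the three metrics concrete, the sketch below shows the group-level rates each one compares: Demographic Parity compares selection rates, Equalized Odds compares true- and false-positive rates, and Predictive Parity compares precision. The toy data and helper names here are hypothetical illustrations, not the dashboard's actual code.

```python
import numpy as np

def group_rates(y_true, y_pred, group):
    """Per-group selection rate, TPR, FPR, and precision for binary predictions."""
    rates = {}
    for g in np.unique(group):
        m = group == g
        yt, yp = y_true[m], y_pred[m]
        rates[g] = dict(
            selection=yp.mean(),                                       # P(pred=1 | group)
            tpr=yp[yt == 1].mean() if (yt == 1).any() else np.nan,     # true positive rate
            fpr=yp[yt == 0].mean() if (yt == 0).any() else np.nan,     # false positive rate
            precision=yt[yp == 1].mean() if (yp == 1).any() else np.nan,
        )
    return rates

# Toy labels, predictions, and a binary protected attribute
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 0, 0])
group  = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

r = group_rates(y_true, y_pred, group)
# Demographic Parity: selection rates should match across groups
dp_gap = abs(r["a"]["selection"] - r["b"]["selection"])
# Equalized Odds: TPR and FPR should both match across groups
eo_gap = max(abs(r["a"]["tpr"] - r["b"]["tpr"]), abs(r["a"]["fpr"] - r["b"]["fpr"]))
# Predictive Parity: precision should match across groups
pp_gap = abs(r["a"]["precision"] - r["b"]["precision"])
print(dp_gap, eo_gap, pp_gap)
```

Note how the gaps disagree on this toy data: the selection rates are identical (no Demographic Parity violation) while the error rates and precision differ between groups, which is exactly the kind of tension between metrics the dashboard visualises.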

Research on sustainable AI practices shows that while many tools track energy use during training, few help developers optimise already-trained models for lower energy consumption without sacrificing significant performance, especially for Large Language Models (LLMs). The dashboard addresses this gap by predicting how pruning (removing less important model parameters) affects both performance and efficiency. Normally this exploration is itself energy-intensive, but by combining pruning with predictive Surrogate-Based Optimization (SBO) we avoid most of that cost. The dashboard visualises trade-offs between accuracy and sustainability, helping practitioners explore pruning configurations and make informed decisions about balancing performance and environmental impact in AI deployment.
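A rough sketch of the surrogate idea: fit a cheap predictive model on a handful of expensive pruning evaluations, then query the surrogate instead of pruning and re-measuring at every candidate configuration. All numbers below are made up for illustration and are not the project's measurements; the quadratic fit merely stands in for whatever surrogate model the SBO actually uses.

```python
import numpy as np

# Hypothetical measurements: accuracy and relative energy use of a model
# pruned at a few sparsity levels (illustrative numbers, not project results).
sparsity = np.array([0.0, 0.2, 0.4, 0.6, 0.8])
accuracy = np.array([0.91, 0.90, 0.88, 0.82, 0.70])
energy   = np.array([1.00, 0.85, 0.68, 0.50, 0.33])

# Fit cheap quadratic surrogates, so intermediate sparsity levels can be
# evaluated without running the energy-intensive pruning pipeline itself.
acc_coef = np.polyfit(sparsity, accuracy, deg=2)
eng_coef = np.polyfit(sparsity, energy, deg=2)

# Query the surrogates on a dense grid and pick the highest sparsity whose
# predicted accuracy stays within 2 points of the unpruned model.
grid = np.linspace(0.0, 0.8, 81)
ok = np.polyval(acc_coef, grid) >= accuracy[0] - 0.02
best = grid[ok].max()
print(f"suggested sparsity: {best:.2f}, "
      f"predicted relative energy: {np.polyval(eng_coef, best):.2f}")
```

The dashboard's visualisation corresponds to plotting both surrogate curves over the sparsity grid, so practitioners can see where predicted accuracy drops off against predicted energy savings before committing to an actual pruning run.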


Products

This project has no products


Related editorials

    editorial

    De toekomst van intelligentie: Stefan Leijnen over impact en ethiek van AI ("The future of intelligence: Stefan Leijnen on the impact and ethics of AI")


Themes



Project status

Finished

Start date

End date

Region

Not known

SIA file number

HT.KIEM.01.070