While much attention has been paid to interventions that foster ethical reflection during the design of AI systems, far less has been paid to fostering ethical reflection among (end) users. Yet with the rise of genAI, AI technologies are no longer confined to expert users; non-experts now use them widely. In this case study of a governmental organization in the Netherlands, we investigated a bottom-up approach to fostering ethical reflection on the use of genAI tools. Guided experimentation, including an intervention built around a serious game, allowed civil servants to experiment with the technology and to understand it and its associated risks. The case study demonstrates that this approach enhances awareness of the possibilities, limitations, and ethical considerations of genAI usage. By analyzing usage statistics, we also estimated the organization’s energy consumption.
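The abstract does not describe how energy consumption was derived from usage statistics. A minimal sketch of such an estimate might look as follows, assuming a hypothetical per-query energy figure and a CSV usage log with a query_count column; neither the file schema nor the energy constant comes from the study itself.

```python
import csv

# Assumed average energy per genAI query, in watt-hours (illustrative placeholder,
# not a value reported in the case study).
ENERGY_PER_QUERY_WH = 3.0

def estimate_energy_kwh(usage_log_path: str) -> float:
    """Sum query counts from a usage-statistics CSV and convert to kilowatt-hours."""
    total_queries = 0
    with open(usage_log_path, newline="") as f:
        for row in csv.DictReader(f):
            # Hypothetical schema: one row per user per day, with a 'query_count' field.
            total_queries += int(row["query_count"])
    return total_queries * ENERGY_PER_QUERY_WH / 1000.0  # Wh -> kWh

if __name__ == "__main__":
    print(f"Estimated consumption: {estimate_energy_kwh('usage_stats.csv'):.1f} kWh")
```

A real estimate would also need to account for model size, prompt length, and hosting infrastructure, which this sketch deliberately collapses into a single assumed constant.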
Cybersecurity faces numerous challenges today. Computer systems and networks are becoming increasingly sophisticated and interconnected, and the attack surface is constantly expanding. In addition, cyber-attacks keep growing in complexity and scale. To address these challenges, security professionals have started to employ generative AI (GenAI) to respond to attacks quickly. However, this raises questions about how GenAI can be adapted to the security environment and where the legal and ethical responsibilities lie. The Universities of Twente and Groningen and the Hanze University of Applied Sciences have initiated an interdisciplinary research project to investigate the legal and technical aspects of large language models (LLMs) in the cybersecurity domain and to develop an advanced AI-powered tool.