Many have suggested that AI-based interventions could enhance learning through personalization, improved teacher effectiveness, or optimized educational processes. However, they could also have unintended or unexpected side-effects, such as undermining learning by enabling procrastination, or reducing social interaction by individualizing learning processes. Responsible scientific experiments are required to map both the potential benefits and the side-effects. The procedures currently used by research ethics committees to screen experiments do not account for the specific risks and dilemmas that AI poses. Previous studies identified sixteen conditions for judging whether trials with experimental technology are responsible. These conditions, however, have not yet been translated into practical procedures, nor do they distinguish between different types of AI applications and risk categories. This paper explores how those conditions could be further specified into procedures that help facilitate and organize responsible experiments with AI, differentiated by the AI application's level of automation. The four procedures we propose are (1) gradual testing, (2) risk and side-effect detection, (3) explainability and severity, and (4) democratic oversight. Researchers and ethics committees can use these procedures to enable responsible experiments with AI interventions in educational settings. Implementation and compliance will require collaboration between researchers, industry, policy makers, and educational institutions.
By analysing intelligence-gathering reform legislation, this article discusses access to justice with respect to communications interception by the intelligence and security services. In the aftermath of the Snowden revelations, sophisticated oversight systems for bulk communications surveillance are being established across the globe. In the Netherlands, prior judicial consent and a binding complaint procedure have been introduced. However, although checks and balances for targeted communications interference have been created, accountability mechanisms are less equipped to effectively remedy indiscriminate interference. Within the context of mass communications surveillance programs, access to justice for complainants therefore remains a contentious issue.
In recent years, much attention has been paid to preventing and effectively resolving disputes with the government during the decision-making and objection-handling phases. The government is considering how citizens can best be supported in decision-making and objection procedures, and what its own role should be in this. This study, commissioned by the Raad voor Rechtsbijstand, addresses the question of how decision-making and objection procedures can be organized so that disputes are prevented and, should they nevertheless arise, are resolved as quickly, effectively, and satisfactorily as possible.