ABSTRACT. It is now generally accepted that the quality of regulatory arrangements should be appraised not only by examining institutional design, but also by evaluating the actual enforcement and implementation of regulations. National governments are therefore advised to take a more active stance in supervising regulatory enforcement by the various regulatory agencies. In some cases, however, government activism may itself impede regulatory enforcement. That this is not a far-fetched idea is shown by an analysis of regulatory enforcement by the Lithuanian Competition Authority in the area of competition policy during the years of integration into the European Union. Not only was the political and financial independence of the Competition Authority difficult to establish, but the functions and competences of the agency were also changed a number of times, which hampered the effectiveness of its performance in enforcing competition law. In addition to the frequent changes of functions, the scope of its competences also kept shifting. As a result, the variety of tasks assigned to the Lithuanian Competition Authority produced a growing workload, which further hindered its regulatory practice. Who is to blame for this? Was it merely the inexperience of a government that, in searching for the best institutional design, could not stop redesigning the regulatory agency, or was it intentional behaviour guided by concrete interests resulting from regulatory capture? The analysis of regulatory enforcement over a period of 15 years does not allow the second possibility to be dismissed.
Many have suggested that AI-based interventions could enhance learning through personalization, improved teacher effectiveness, or optimized educational processes. However, they could also have unintended or unexpected side-effects, such as undermining learning by enabling procrastination, or reducing social interaction by individualizing learning processes. Responsible scientific experiments are required to map both the potential benefits and the side-effects. Current procedures used by research ethics committees to screen experiments do not take into account the specific risks and dilemmas that AI poses. Previous studies identified sixteen conditions that can be used to judge whether trials with experimental technology are responsible. These conditions, however, have not yet been translated into practical procedures, nor do they distinguish between different types of AI applications and risk categories. This paper explores how those conditions could be further specified into procedures that help facilitate and organize responsible experiments with AI, while differentiating between types of AI applications based on their level of automation. The four procedures we propose are (1) a process of gradual testing, (2) risk and side-effect detection, (3) explainability and severity, and (4) democratic oversight. These procedures can be used by researchers and ethics committees to enable responsible experiments with AI interventions in educational settings. Implementation and compliance will require collaboration between researchers, industry, policy makers, and educational institutions.