Extended Reality (XR) technologies—including virtual reality (VR), augmented reality (AR), and mixed reality (MR)—offer transformative opportunities for education by enabling immersive and interactive learning experiences. In this study, we employed a mixed-methods approach that combined systematic desk research with an expert member check to evaluate existing pedagogical frameworks for XR integration. We analyzed several established models (e.g., TPACK, TIM, SAMR, CAMIL, and DigCompEdu) to assess their strengths and limitations in addressing the unique competencies required for XR-supported teaching. Our results indicate that, while these models offer valuable insights into technology integration, they often fall short in specifying XR-specific competencies. Consequently, we extended the DigCompEdu framework by identifying and refining concrete building blocks for teacher professionalization in XR. The conclusions drawn from this research underscore the necessity for targeted professional development that equips educators with the practical skills needed to effectively implement XR in diverse educational settings, thereby providing actionable strategies for fostering digital innovation in teaching and learning.
Artificial Intelligence (AI) is increasingly shaping the way we work, live, and interact, leading to significant developments across various sectors of industry, including media, finance, business services, retail, and education. In recent years, numerous high-level principles and guidelines for ‘responsible’ or ‘ethical’ AI have been formulated. However, these theoretical efforts often fall short when it comes to addressing the practical challenges of implementing AI in real-world contexts: Responsible Applied AI. The one-day workshop on Responsible Applied Artificial InTelligence (RAAIT) at HHAI 2024: Hybrid Human AI Systems for the Social Good in Malmö, Sweden, brought together researchers studying various dimensions of Responsible AI in practice. This was the second RAAIT workshop, following the first edition at the 2023 European Conference on Artificial Intelligence (ECAI) in Krakow, Poland.
This white paper, presented by the Ethics Working Group of the uNLock Consortium, reports the findings of the conceptual phase of our investigation into the ethical issues of the uNLock solution, which provides identity management for sharing and presenting medical COVID-19 credentials (test results) in the context of healthcare institutions. We outline the direct and indirect stakeholders of the uNLock solution and map values, benefits, and harms to the respective stakeholders. The resulting conceptual framework allowed us to lay down key norms and principles of Self-Sovereign Identity (SSI) in the specific context of the uNLock solution. We hope that adherence to these norms and principles can serve as groundwork for the anticipatory mitigation of moral risks and hazards stemming from the implementation of the uNLock solution and similar solutions. Our findings suggest that even an early-stage conceptual investigation within the framework of Value Sensitive Design (VSD) reveals numerous ethical issues. Because the proposed implementation of the uNLock app in the healthcare context did not proceed beyond the prototype stage, our investigation was limited to the conceptual stage and did not involve the practical application of the VSD method, that is, the translation of norms and values into engineering requirements. Nevertheless, our findings suggest that applying the VSD method in this context is a promising approach that helps to identify moral conflicts and risks at a very early stage of the technological development of SSI solutions. Furthermore, we would stress that, in light of our findings, it became painfully obvious that hasty implementation of a medical credentials system without thorough ethical assessment risks creating more ethical issues than it addresses.