Virtual training systems provide highly realistic training environments for police. This study assesses whether a pain stimulus can enhance the training responses and sense of presence in these systems. Police officers (n = 219) were trained either with or without a pain stimulus in a 2D simulator (VirTra V-300) and a 3D virtual reality (VR) system. 2 (training simulator) × 2 (pain stimulus) ANOVAs revealed a significant interaction effect for perceived stress (p = .010, ηp² = .039). Post-hoc pairwise comparisons showed that VR provoked significantly higher levels of perceived stress than VirTra when no pain stimulus was used (p = .009). With a pain stimulus, VirTra training provoked significantly higher levels of perceived stress than VirTra training without one (p < .001). Sense of presence was unaffected by the pain stimulus in both training systems. Our results indicate that VR training appears sufficiently realistic without adding a pain stimulus. Practitioner summary: Virtual police training benefits from highly realistic training environments. This study found that adding a pain stimulus heightened perceived stress in a 2D simulator, whereas it influenced neither training responses nor sense of presence in a VR system. VR training appears sufficiently realistic without adding a pain stimulus.
Collaborative Mixed Reality Environments (CMREs) enable the design of Performative Mixed Reality Experiences (PMREs) that engage participants’ physical bodies, mixed reality environments, and the technologies utilized. However, the physical body is rarely purposefully incorporated throughout such design processes, leaving designers seated behind their desks, relying on their previous know-how and assumptions. In contrast, embodied design techniques from HCI and the performing arts afford direct corporeal feedback to verify and adapt experiential aesthetics within the design process. This paper proposes a performative prototyping method that combines bodystorming with Wizard of Oz techniques and a puppeteering approach, using an inside-out somaesthetic perspective and an outside-in dramaturgical perspective. In addition, it suggests an interdisciplinary vocabulary to share and evaluate PMRE experiences during and after the design collaboration. This method is exemplified and investigated by comparing two case studies of PMRE design projects in higher art education using the existing social VR platform NEOS VR, adapted as a CMRE.
Seven college lecturers and two senior support staff were interviewed during 2021 about their experiences teaching in hybrid virtual classrooms (HVCs). These technology-rich learning environments allow teachers to simultaneously teach students who are in class (on campus) and students who are joining remotely (online). There were two reasons for this focus: first, ongoing experimentation by innovative teaching staff who were already using this format before the COVID-19 pandemic; second, its potential as a solution to restrictions on classroom size imposed by the pandemic. Challenges lecturers faced include adjusting teaching practice and lesson delivery to serve in-class and online students equally; engaging and linking the different student groups in structured and natural interactions; overcoming technical challenges regarding audio and visual equipment; suitably configuring teaching spaces; and having sufficient pedagogical and technical support to manage this complex process. A set of practical suggestions is provided. Lecturers should make reasoned choices when teaching in this format, since it requires continued experimentation and practice to enhance the teaching and learning opportunities. When external factors such as classroom size restrictions are the driving force, the specific type of synchronous learning activities should be carefully considered. The structure and approach to lessons need to be rethought to optimise the affordances of the hybrid virtual and connected classroom. The complexity of using these formats, and the additional time needed to do them properly, should not be underestimated. These findings are consistent with previous literature on this subject. An ongoing dialogue with faculty, support staff and especially students should be an integral part of any further implementation of this format.
Alcohol use disorder (AUD) is a major problem. In the USA alone, 15 million people have an AUD, and more than 950,000 Dutch people drink excessively. Worldwide, 3-8% of all deaths and 5% of all illnesses and injuries are attributable to AUD. Care faces challenges: for example, more than half of AUD patients relapse within a year of treatment. One solution is cue exposure therapy (CET), in which clients are exposed to triggers (objects, people and environments) that arouse craving. Virtual reality exposure therapy (VRET) is used to experience these triggers in a realistic, safe, and personalized way. In this way, coping skills are trained to counteract alcohol cravings. The effectiveness of VRET has been clinically proven. However, the advent of AR technologies raises the question of exploring the possibilities of augmented reality exposure therapy (ARET). ARET offers the same benefits as VRET (such as a realistic, safe experience). But because AR integrates virtual components into the real environment, with the body visible, it presumably evokes a different type of experience. This may increase the ecological validity of CET in treatment. In addition, ARET is cheaper to develop (fewer virtual elements) and clients/clinics have easier access to AR (via smartphone/tablet). Moreover, new AR glasses are being developed, which solve disadvantages such as a too-small smartphone screen. Despite the demand from practitioners, ARET has never been developed and researched for addiction. In this project, the first ARET prototype for AUD in the treatment of alcohol addiction is developed. The prototype is built on volumetric-captured digital humans and made accessible for AR glasses, tablets and smartphones. It will be based on RECOVRY, a VRET for AUD developed by the consortium.
A prototype test among (ex-)AUD clients will provide insight into needs and points for improvement from both the patient and care-provider perspectives, and into the effect of ARET compared to VRET.
There is increasing interest in the use of Virtual Reality (VR) in the field of sustainable transportation and urban development. Even though much has been said about the opportunities of using VR technology to enhance design and involve stakeholders in the process, implementations of VR technology are still limited. To bridge this gap, the urban intelligence team of NHTV Breda University of Applied Sciences developed CycleSPEX, a VR simulator for cycling. CycleSPEX enables researchers, planners and policy makers to shape a variety of scenarios around knowledge and design questions and test their impact on the experiences and behaviour of users, in this case (potential) cyclists. The impact of infrastructure enhancements as well as changes in the surrounding built environment can be tested, analysed and evaluated. The main advantage for planners and policy makers is that the VR environment enables them to test scenarios ex ante in a safe and controlled setting. “The key to a smart, healthy and safe urban environment lies in engaging mobility. Healthy cities are often characterized by high-quality facilities for the active modes. But what contributes to a pleasant cycling experience? CycleSPEX helps us to understand the relations between cyclists on the move and (designed) urban environments.”
The objective of DIGIREAL-XL is to build a Research, Development & Innovation (RD&I) Center (SPRONG group, level 4) on Digital Realities (DR) for societal-economic impact. DR are intelligent, interactive, and immersive digital environments that seamlessly integrate Data, Artificial Intelligence/Machine Learning, Modelling-Simulation, and Visualization by using Game and Media Technologies (game platforms/VR/AR/MR). Examples of these disruptive DR innovations can be seen in many domains: in the entertainment and service industries (Digital Humans); in the entertainment, leisure, learning, and culture domain (Virtual Museums and Music Festivals); and in the decision-making and spatial-planning domain (Digital Twins). There are many well-recognized innovations in each of the enabling technologies (Data, AI, VR/AR). However, DIGIREAL-XL goes beyond these disconnected state-of-the-art developments and technologies in its focus on DR as an integrated socio-technical concept. This requires pre-commercial, interdisciplinary RD&I in cross-sectoral and inter-organizational networks. There is a need to integrate theories, methodologies, smart tools, and cross-disciplinary field labs for the effective and efficient design and production of DR. In doing so, DIGIREAL-XL addresses the challenges formulated under the KIA Enabling Technologies / Key Methodologies for sectoral and societal transformation. BUas (lead partner) and FONTYS build a SPRONG group level 4 based on four pillars: an RD&I program, field labs, lab infrastructure, and an organizational excellence program. This provides a solid foundation to initiate and execute challenging, externally funded RD&I projects with partners in SPRONG stage one ('21-'25) and beyond (until '29). DIGIREAL-XL is organized in a coherent set of work packages with clear objectives, tasks, deliverables, and milestones.
The SPRONG group is well positioned within the emerging MINDLABS Interactive Technologies ecosystem and strengthens the regional (North Brabant) digitalization agenda. Field labs on DR work with support and co-funding from many network organizations, such as Digishape and Chronosphere, as well as from public, private, and societal organizations.