A case study and method-development study of online simulation gaming to enhance knowledge exchange in youth care. Youth care professionals affirm that the application used is relevant as an additional tool for knowledge construction about complex cases. They state that the usability of the application is adequate, though they offer some remarks on adapting the virtual environment to the specific needs of youth care knowledge exchange. The method of online simulation gaming appears useful for improving network competences and for exploring participants' hidden professional capacities with respect to the construction of situational cognition, discourse participation and the accountability of intervention choices.
When it comes to hard-to-solve problems, the significance of situational knowledge construction and network coordination must not be underestimated. Professional deliberation is directed toward understanding, acting and analysis. We need smart and flexible ways to channel systems information from practice to network reflection, and to guide the results of network consultation back to practice. This article presents a case study proposal as a follow-up to a recent dissertation on online simulation gaming for youth care network exchange (Van Haaster, 2014).
Anaesthesiology residents at Leiden University Medical Center regularly undergo simulation training with a full-body manikin. This is a vital aspect of the clinical programme, providing a stressful yet safe environment for effective crisis resource management (CRM) training. Unfortunately, the COVID-19 pandemic made real-life simulations challenging due to organizational and preventive measures. As a result, we explored asynchronous training opportunities using a multiplayer virtual reality (VR) simulation. VR simulations can create personalized scenarios, facilitating differentiated learning through enhanced sensory immersion. VR offers full immersion with high potential for visual effects, while simultaneously allowing changes in patient characteristics such as sex, weight, external trauma and age, which is impossible with regular manikin training. The three-step approach involved (1) identifying user requirements, (2) developing a prototype and (3) assessing the project's viability and interest for expansion.
The IMPULS-2020 project DIGIREAL (BUas, 2021) aims to significantly strengthen BUas' Research and Development (R&D) on Digital Realities for the benefit of innovation in our sectoral industries. The project will furthermore help BUas position itself in the emerging innovation ecosystems on Human Interaction, AI and Interactive Technologies. The pandemic has had a tremendous negative impact on BUas' industrial sectors of research: Tourism, Leisure and Events, Hospitality and Facility, Built Environment and Logistics. Our partner industries are in great need of innovative responses to the crisis. Data and AI, combined with Interactive and Immersive Technologies (Games, VR/AR), can provide part of the solution, in line with the key enabling technologies of the Smart Industry agenda. DIGIREAL builds upon our well-established expertise and capacity in entertainment and serious games and digital media (VR/AR). It furthermore strengthens our initial plans to venture into Data and Applied AI. Digital Realities offer great opportunities for sectoral industry research and innovation, such as experience measurement in Leisure and Hospitality, data-driven decision-making for (sustainable) tourism, geo-data simulations for Logistics and Digital Twins for Spatial Planning. Although BUas already has successful R&D projects in these areas, the synergy can and should be significantly improved. We propose a coherent one-year Impuls-funded package to develop, in 2021:
1. a multi-year R&D program on Digital Realities, leading to
2. strategic R&D proposals, in particular a SPRONG/sleuteltechnologie (key technology) proposal;
3. partnerships in the regional and national innovation ecosystem, in particular Mind Labs and the Data Development Lab (DDL);
4. a shared Digital Realities Lab infrastructure, in particular hardware/software/peopleware for Augmented and Mixed Reality;
5. leadership, support and operational capacity to achieve and support the above.
The proposal presents a work program and management structure, with external partners in an advisory role.
In this project, the AGM R&D team developed and refined the use of a facial scanning rig. The rig is a physical device comprising multiple cameras and lights mounted on scaffolding around a 'scanning volume': the area in which objects are placed before being photographed from multiple angles. The object is typically a person's head, but it can be anything of approximately that size. Software compares the photographs to create a digital 3D reconstruction, a process called photogrammetry. The 3D model is then processed by further pieces of software and eventually becomes a face that can be animated inside Unreal Engine, a popular piece of game development software made by the company Epic. This project was funded by Epic's 'Megagrant' system, and the focus of the work was on streamlining and automating the processing pipeline and on improving the quality of the resulting output. Additional work has been done on skin shaders (simulating the appearance of real skin in digital form) and on the use of AI to (re)create lifelike hair styles. The R&D work has produced significant savings in processing time, improved the quality of facial scans, produced a system that has benefitted the educational offering of BUas, and attracted collaborators from the commercial entertainment and simulation industries. This work complements and extends previous work done on the VIBE project, where the focus was on creating lifelike human avatars for the medical industry.
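To illustrate what automating such a processing pipeline can look like, here is a minimal sketch that batch-runs a command-line photogrammetry tool over folders of rig photos. It assumes Meshroom's meshroom_batch CLI is installed and on the PATH; the folder layout and tool choice are illustrative assumptions, not the project's actual toolchain.

```python
import subprocess
from pathlib import Path

# Hypothetical locations; adjust to the local installation and rig output.
MESHROOM_BATCH = "meshroom_batch"   # CLI bundled with AliceVision Meshroom (assumed)
CAPTURE_ROOT = Path("captures")     # one folder of multi-angle photos per scanned head
OUTPUT_ROOT = Path("models")

def process_capture(capture_dir: Path) -> Path:
    """Run photogrammetry on one set of photos, producing a textured 3D mesh."""
    out_dir = OUTPUT_ROOT / capture_dir.name
    out_dir.mkdir(parents=True, exist_ok=True)
    # meshroom_batch reconstructs a mesh from the image folder.
    subprocess.run(
        [MESHROOM_BATCH, "--input", str(capture_dir), "--output", str(out_dir)],
        check=True,
    )
    return out_dir

if __name__ == "__main__":
    for capture in sorted(CAPTURE_ROOT.iterdir()):
        if capture.is_dir():
            mesh_dir = process_capture(capture)
            print(f"Reconstructed {capture.name} -> {mesh_dir}")
```

Downstream steps (clean-up, retopology, import into Unreal Engine) would hang off the same loop, which is where the time savings of an automated pipeline accumulate.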
Automated driving has nowadays become reality with the help of in-vehicle advanced driver assistance systems (ADAS). More and more such systems are being developed by OEMs and service providers. These (partly) automated systems are intended to enhance road and traffic safety (among other benefits) by addressing human limitations such as fatigue, low vigilance, distraction, slow reaction time and low behavioral adaptation. In other words, (partly) automated driving should relieve the driver of one or more primary driving tasks, making the ride enjoyable, safer and more relaxing. Present in-vehicle systems, on the contrary, require continuous vigilance and behavioral adaptation from human drivers, and may also subject them to frequent in-and-out-of-the-loop situations and warnings. The tip of the iceberg is the robotic behavior of these in-vehicle systems, in contrast to human driving behavior, which adapts to road, traffic, other users, laws, weather, and so on. Furthermore, no two human drivers are the same, and thus they do not share the same driving styles and preferences. So how can one design of robotic behavior for an in-vehicle system suit all human drivers? To emphasize this need, the HUBRIS project proposes to quantify the behavioral difference between a human driver and two in-vehicle systems through naturalistic driving in highway conditions, and subsequently to formulate preliminary design guidelines using the quantified behavioral difference matrix. Partners are V-tron, a service provider and potential developer of in-vehicle systems; Smits Opleidingen, a driving school keen on providing state-of-the-art education and training; Dutch Autonomous Mobility (DAM) B.V., a company active in the operation, testing and assessment of self-driving vehicles in the Groningen province; Goudappel Coffeng, consultants in mobility and experts in traffic psychology; and Siemens Industry Software and Services B.V. (Siemens), developers of traffic simulation environments for testing in-vehicle systems.
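As a sketch of how such a behavioral difference matrix might be filled in, the snippet below compares per-signal driving statistics between a human trace and a system trace from logged naturalistic driving. The signal names, the normalization and the synthetic data are illustrative assumptions, not the project's actual metrics.

```python
import numpy as np

def behavioral_difference(human: dict, system: dict,
                          signals=("speed", "headway", "accel")) -> dict:
    """Compare per-signal driving behavior between a human trace and an ADAS trace.

    Each trace maps a signal name to a 1-D numpy array sampled over a trip.
    Returns one row of a 'behavioral difference matrix': per signal, the absolute
    difference of means, normalized by the human driver's standard deviation.
    """
    row = {}
    for sig in signals:
        h = np.asarray(human[sig], dtype=float)
        s = np.asarray(system[sig], dtype=float)
        scale = h.std() or 1.0  # avoid division by zero on constant signals
        row[sig] = abs(h.mean() - s.mean()) / scale
    return row

# Illustrative synthetic traces (real data would come from highway driving logs).
rng = np.random.default_rng(0)
human_trace = {"speed": rng.normal(110, 8, 1000),     # km/h
               "headway": rng.normal(1.8, 0.5, 1000),  # s
               "accel": rng.normal(0.0, 0.4, 1000)}    # m/s^2
adas_trace = {"speed": rng.normal(108, 2, 1000),
              "headway": rng.normal(2.2, 0.1, 1000),
              "accel": rng.normal(0.0, 0.15, 1000)}

print(behavioral_difference(human_trace, adas_trace))
```

Running the same comparison for each in-vehicle system yields one row per system, and the resulting matrix indicates on which signals the robotic behavior deviates most from the human baseline.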