A case study and method-development investigation of online simulation gaming to enhance knowledge exchange in youth care. Youth care professionals affirm that the application used is sufficiently relevant as an additional tool for knowledge construction about complex cases. They consider the usability of the application adequate, although they offer some suggestions for adapting the virtual environment to the specific needs of youth care knowledge exchange. The method of online simulation gaming appears useful for improving network competences and for exploring the hidden professional capacities of participants with respect to the construction of situational cognition, participation in discourse, and the accountability of intervention choices.
DOCUMENT
When it comes to hard-to-solve problems, the significance of situational knowledge construction and network coordination must not be underestimated. Professional deliberation is directed toward understanding, acting and analysis. We need smart and flexible ways to direct systems information from practice to network reflection, and to guide results from network consultation back to practice. This article presents a case study proposal as a follow-up to a recent dissertation on online simulation gaming for youth care network exchange (Van Haaster, 2014).
DOCUMENT
Industry 4.0 has placed an emphasis on real-time decision making in the execution of systems, such as semiconductor manufacturing. This article evaluates a scheduling methodology called Evolutionary Learning Based Simulation Optimization (ELBSO) using data generated by a Manufacturing Execution System (MES) for scheduling a Stochastic Job Shop Scheduling Problem (SJSSP). ELBSO is embedded within Ordinal Optimization (OO); in its first phase it uses a meta-model, which was previously trained on a Discrete Event Simulation (DES) model of an SJSSP. The meta-model within ELBSO is based on Genetic Programming (GP) Machine Learning (ML). Instead of using the DES model to train and test the meta-model, this article uses historical data from a front-end fab. The results are statistically evaluated for the quality of the fit produced by the meta-model.
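A minimal sketch of the core idea, training a GP-based meta-model on historical MES records rather than DES output, might look as follows. This is an illustrative assumption, not the article's implementation: the gplearn library, the file name, the feature columns and the target variable are all hypothetical placeholders.

```python
# Illustrative sketch: GP-based meta-model fitted to historical MES data.
# Schema (mes_history.csv, column names) is assumed, not from the article.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_absolute_error
from gplearn.genetic import SymbolicRegressor

# Historical lot/dispatch records exported from the MES (hypothetical schema).
df = pd.read_csv("mes_history.csv")
X = df[["queue_length", "machine_utilisation", "setup_time", "due_date_slack"]]
y = df["realised_cycle_time"]  # performance measure the meta-model should predict

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

# Genetic-programming regressor as the meta-model: it evolves symbolic
# expressions approximating the observed schedule performance.
gp = SymbolicRegressor(
    population_size=1000,
    generations=30,
    function_set=("add", "sub", "mul", "div", "min", "max"),
    parsimony_coefficient=0.001,
    random_state=42,
)
gp.fit(X_train, y_train)

# Statistical check of the quality of fit, analogous to the article's evaluation.
pred = gp.predict(X_test)
print("R2 :", r2_score(y_test, pred))
print("MAE:", mean_absolute_error(y_test, pred))
```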
DOCUMENT
In recent years, disasters have increased in number, location, intensity and impact, and they have become more unpredictable due to climate change, raising questions about disaster preparedness and management. Attempts by government entities to limit the impact of disasters are insufficient; awareness and action are urgently needed at the citizen level to develop capacity, facilitate the implementation of management plans, and coordinate local action in times of uncertainty. A cultural and behavioral change is needed to create resilient citizens, communities, and environments. Developing and maintaining new ways of thinking has to start with anticipating long-term, bottom-up resilience and collaboration. We propose to develop a serious game on a physical tabletop that allows individuals and communities to work with a moderator, to simulate disasters and individual and collective action in their locality, to mimic real-world scenarios using game mechanics, and to train trainers. Two partners, Stratsims, a company specialized in game development, and Society College, an organization that aims to strengthen society, combine their expertise as changemakers. They work with Professor Carola Hein (TU Delft), who has developed knowledge about disaster and rebuilding worldwide and about the conditions for meaningful, long-term disaster preparedness. The partners have already reached out to relevant communities in Amsterdam and the Netherlands, including UNUN, a network of Ukrainians in the Netherlands. Jaap de Goede, an experienced strategy simulation expert, will lead outreach activities in diverse communities to train trainers and moderate workshops. The game will be highly relevant for citizens, helping to grow awareness and capacity for preparing for and coping with disasters in a bottom-up fashion. The toolkit will be available open access for download and printing, as well as for purchase. The team will offer training and facilitate workshops with local communities to initiate bottom-up change in policy making and planning.
The bi-directional communication link with the physical system is one of the main distinguishing features of the Digital Twin paradigm. This continuous flow of data and information along its entire life cycle is what makes a Digital Twin a dynamic, evolving entity rather than merely a high-fidelity copy. There is an increasing realisation of the importance of a well-functioning digital twin in critical infrastructures, such as water networks. The configuration of water network assets, such as valves, pumps, boosters and reservoirs, must be carefully managed and water flows rerouted, often manually, which is a slow and costly process. State-of-the-art water management systems assume a relatively static physical model that requires manual corrections. Any change in network conditions or topology due to degraded control mechanisms, ongoing maintenance, or changes in the external context, such as a heat wave, makes the existing model diverge from reality. Our project proposes a unique approach to real-time monitoring of the water network that can handle automated changes to the model, based on the measured discrepancy between the model and the obtained IoT sensor data. We aim for an evolutionary approach that can apply detected changes to the model and update it in real time without the need for additional model validation and calibration. State-of-the-art deep learning algorithms will be applied to create a machine-learning, data-driven simulation of the water network system. Moreover, unlike most research, which focuses on the detection of network problems and sensor faults, we will investigate the possibility of going a step further and continuing to use the degraded network and malfunctioning sensors until maintenance and repairs can take place, which can take a long time. We will create a formal model and analyse the effect of different malfunctions on data readings, in order to construct a mitigation mechanism that is tailor-made for each malfunction type and allows the data to continue to be used, albeit in a limited capacity.
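The discrepancy-triggered update loop at the heart of this proposal can be sketched as follows. This is a minimal illustration under stated assumptions: the simulate and update_model functions, the threshold value, and the use of a relative norm are hypothetical placeholders, not part of any existing water-network API or of the project's actual implementation.

```python
# Minimal sketch of discrepancy-triggered model updating for a digital twin.
# All names and the threshold are illustrative assumptions.
import numpy as np

DISCREPANCY_THRESHOLD = 0.15  # assumed relative tolerance before the model is evolved

def simulate(model, boundary_conditions):
    """Predict flows/pressures at sensor locations from the current twin model."""
    return model(boundary_conditions)

def update_model(model, sensor_readings):
    """Placeholder for the evolutionary, data-driven model update step."""
    # e.g. retrain the ML surrogate on a recent window of sensor data
    return model

def monitoring_step(model, boundary_conditions, sensor_readings):
    predicted = simulate(model, boundary_conditions)
    # Relative discrepancy between the twin's prediction and IoT measurements.
    discrepancy = (np.linalg.norm(predicted - sensor_readings)
                   / np.linalg.norm(sensor_readings))
    if discrepancy > DISCREPANCY_THRESHOLD:
        # The model has diverged from reality (topology change, degraded valve,
        # faulty sensor, ...): evolve the model rather than recalibrate manually.
        model = update_model(model, sensor_readings)
    return model, discrepancy
```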