Completeness of data is vital for decision making and forecasting in Building Management Systems (BMS), as missing data can result in biased decisions down the line. This study creates a guideline for imputing gaps in BMS datasets by comparing four methods: the K-Nearest Neighbours algorithm (KNN), Recurrent Neural Networks (RNN), Hot Deck (HD) and Last Observation Carried Forward (LOCF). The guideline gives the best method per gap size and scale of measurement. The four selected methods come from various backgrounds and are tested on a real BMS and meteorological dataset. The focus of this paper is not to impute every cell as accurately as possible but to impute trends back into the missing data. Performance is characterised by a set of criteria that allows users to choose the imputation method best suited to their needs. The criteria are Variance Error (VE) and Root Mean Squared Error (RMSE). VE is given more weight, as it evaluates the imputed trend better than RMSE does. From preliminary results, it was concluded that the best K-values for KNN are 5 for the smallest gap and 100 for the larger gaps. Using a genetic algorithm, the best RNN architecture for the purpose of this paper was determined to be Gated Recurrent Units (GRU). The comparison was performed using a training dataset different from the imputation dataset. The results show no consistent link between differences in kurtosis or skewness and imputation performance. The experiment concluded that RNN is best for interval data and HD is best for both nominal and ratio data. No single method was best for all gap sizes, as this depended on the data to be imputed.
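To make the comparison concrete, the sketch below scores two of the four methods, LOCF and KNN, on an artificial gap using the paper's two criteria. It is a minimal illustration, not the study's actual pipeline: the synthetic signal, the gap location and the VE formula (taken here as the absolute difference between the variances of the true and imputed values) are assumptions, while the K-value of 5 for small gaps follows the abstract.

```python
import numpy as np
import pandas as pd
from sklearn.impute import KNNImputer

def rmse(true, imputed):
    """Root Mean Squared Error over the imputed cells."""
    return float(np.sqrt(np.mean((true - imputed) ** 2)))

def variance_error(true, imputed):
    """Variance Error: how well the imputation reproduces the spread of the
    original signal. Defined here as the absolute difference between the
    variances -- an assumed formula, as the abstract does not spell it out."""
    return float(abs(np.var(true) - np.var(imputed)))

rng = np.random.default_rng(42)
signal = pd.Series(np.sin(np.linspace(0, 12, 500)) + 0.1 * rng.standard_normal(500))

# Punch an artificial gap into the series so both methods can be scored.
gapped = signal.copy()
gapped.iloc[200:260] = np.nan

# LOCF: carry the last observed value forward across the gap.
locf = gapped.ffill()

# KNN (k = 5 for small gaps, per the abstract). KNNImputer expects a 2-D
# feature matrix, so each value is paired with its time index.
features = np.column_stack([np.arange(len(gapped)), gapped.to_numpy()])
knn_filled = KNNImputer(n_neighbors=5).fit_transform(features)[:, 1]

truth = signal.to_numpy()[200:260]
for name, filled in [("LOCF", locf.to_numpy()), ("KNN", knn_filled)]:
    f = filled[200:260]
    print(f"{name}: RMSE={rmse(truth, f):.3f}  VE={variance_error(truth, f):.4f}")
```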
Over the past forty years, the use of process models in practice has grown extensively. Yet until twenty years ago, remarkably little was known about the factors that contribute to the human understandability of process models in practice. Since then, research has indeed been conducted on this important topic, for example by creating modelling guidelines. Unfortunately, the suggested guidelines often fail to achieve the desired effects because they are not tied to actual experimental findings. This creates a need for knowledge about what kind of visualisation of process models is perceived as understandable, in order to improve the understanding of different stakeholders. The objective of this study is therefore to answer the question: how can process models be visually enhanced so that they facilitate a common understanding by different stakeholders? To this end, five sub-research questions (SRQs) are discussed, covering three studies. By combining social psychology and process modelling, we can work towards a more human-centred and empirically based solution for enhancing the understanding of process models by different stakeholders through visualisation.
Within the context of the Iliad project, the authors present early design mock-ups and the resulting technical challenges for a 2D/3D/4D geo-data visualisation application focused on microparticle flows. The Iliad – Digital Twins of the Ocean project (EU Horizon 2020) aims to develop a ‘system of systems’ for creating cutting-edge digital twins of specific sea and ocean areas for diverse purposes related to their sustainable use and protection. One of the Iliad pilots addresses water quality monitoring by creating an application that offers dynamic 2D and 3D visualisations of specifically identified microparticles, initially observed by buoys/sensors deployed at specific locations and whose subsequent flows are modelled by separate software. The main upcoming technical challenges concern the data-driven approach, in which the application's input data is obtained entirely through external API-based services offering (near) real-time observed data from buoys/sensors and simulated data emanating from particle transport models.
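The data-driven challenge described above can be pictured with a small polling sketch. Everything here is hypothetical: the endpoint URL, the query parameters and the payload shape are placeholders, since the abstract does not describe the pilot's actual API.

```python
import time
import requests

# Hypothetical endpoint -- the URL, parameters and payload shape below
# are assumptions, not the Iliad pilot's real API.
BUOY_API = "https://example.org/iliad/buoys/{buoy_id}/microparticles"

def fetch_observations(buoy_id: str, since: str) -> list[dict]:
    """Pull (near) real-time microparticle observations for one buoy."""
    resp = requests.get(BUOY_API.format(buoy_id=buoy_id),
                        params={"since": since}, timeout=10)
    resp.raise_for_status()
    return resp.json()["observations"]  # assumed payload shape

def render(observation: dict) -> None:
    """Stub standing in for the 2D/3D/4D visualisation layer."""
    print(observation)

def poll(buoy_id: str, interval_s: int = 60) -> None:
    """Feed newly observed data into the visualisation as it arrives."""
    last_seen = "1970-01-01T00:00:00Z"
    while True:
        for obs in fetch_observations(buoy_id, since=last_seen):
            last_seen = max(last_seen, obs["timestamp"])  # ISO 8601 sorts lexically
            render(obs)
        time.sleep(interval_s)
```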
With summaries in Dutch, Esperanto and English. DOI: 10.4233/uuid:d7132920-346e-47c6-b754-00dc5672b437 "The subject of this study is deformation analysis of the earth's surface (or part of it) and spatial objects on, above or below it. Such analyses are needed in many domains of society. Geodetic deformation analysis uses various types of geodetic measurements to substantiate statements about changes in geometric positions. Professional practice, e.g. in the Netherlands, regularly applies methods for geodetic deformation analysis that have shortcomings, e.g. because the methods apply substandard analysis models or defective testing methods. These shortcomings hamper communication about the results of deformation analyses with the various parties involved. To improve communication, solid analysis models and a common language have to be used, which requires standardisation. Operational demands for geodetic deformation analysis are the reason to formulate in this study seven characteristic elements that a solid analysis model needs to possess. Such a model can handle time series of several epochs. It analyses only size and form, not position and orientation of the reference system; and datum points may be under the influence of deformation. The geodetic and physical models are combined in one adjustment model. Full use is made of available stochastic information. Statistical testing and computation of minimal detectable deformations are incorporated. Solution methods can handle rank-deficient matrices (both the model matrix and the cofactor matrix). And, finally, a search for the best hypothesis/model is implemented. Because a geodetic deformation analysis model with all seven elements does not exist, this study develops such a model. For effective standardisation, geodetic deformation analysis models need: practical key performance indicators; a clear procedure for using the model; and the possibility to graphically visualise the estimated deformations."
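One of the seven elements, solution methods that can handle rank-deficient matrices, can be illustrated with a minimal levelling-style example: only height differences between three points are observed, so absolute heights are not estimable (the datum is free) and the normal matrix is singular. The numbers are invented for illustration; the study's actual adjustment model is far richer, combining geodetic and physical models with full stochastic information.

```python
import numpy as np

# A 1-D levelling-style network: observations are height *differences*
# between three points, so absolute heights are not estimable (free datum)
# and the normal matrix is rank deficient. Numbers are invented.
A = np.array([[-1.0,  1.0, 0.0],   # h2 - h1
              [ 0.0, -1.0, 1.0],   # h3 - h2
              [-1.0,  0.0, 1.0]])  # h3 - h1
y = np.array([0.51, 0.98, 1.52])   # observed differences (m)

N = A.T @ A                        # normal matrix
print("rank(N) =", np.linalg.matrix_rank(N), "of", N.shape[0])

# The pseudoinverse picks the minimum-norm solution among all least-squares
# solutions, a standard way to handle the datum (rank) defect.
x_hat = np.linalg.pinv(N) @ A.T @ y
print("estimated heights:   ", np.round(x_hat, 3))
print("adjusted differences:", np.round(A @ x_hat, 3))
```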
from the paper: "This paper presents research endeavouring to model site work in a 4D BIM model. Next, simulations are performed with this model in five scenarios, including specific interventions in work organisation, notably changing the positions of facilities for site workers. A case study was done in a construction project in the Netherlands. The research has shown that it is possible to model the time use of site workers in 4D BIM. It has also shown the potential to perform and calculate specific interventions in the model, and to project realistic changes in productive time use as a result."
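The kind of intervention calculated in such scenarios can be sketched with a back-of-the-envelope model: moving facilities closer to the work area cuts walking time and so raises the share of productive time. All numbers below are illustrative assumptions, not values from the paper.

```python
# Toy estimate of how relocating site facilities changes productive time.
# All constants are illustrative assumptions, not values from the paper.
WORKDAY_MIN = 8 * 60          # working day in minutes
WALK_SPEED_M_PER_MIN = 80.0   # roughly 4.8 km/h
TRIPS_PER_DAY = 6             # breaks, toilet, material pickups, ...

def productive_share(facility_distance_m: float) -> float:
    """Share of the workday left after round trips to the facilities."""
    walking = TRIPS_PER_DAY * 2 * facility_distance_m / WALK_SPEED_M_PER_MIN
    return (WORKDAY_MIN - walking) / WORKDAY_MIN

for scenario, dist in [("facilities at 200 m", 200.0),
                       ("facilities moved to 60 m", 60.0)]:
    print(f"{scenario}: {productive_share(dist):.1%} productive")
```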
The report from Inholland University is dedicated to the impacts of data-driven practices on non-journalistic media production and creative industries. It explores trends, showcases advancements, and highlights opportunities and threats in this dynamic landscape. Examining various stakeholders' perspectives provides actionable insights for navigating challenges and leveraging opportunities. Through curated showcases and analyses, the report underscores the transformative potential of data-driven work while addressing concerns such as copyright issues and AI's role in replacing human artists. The findings culminate in a comprehensive overview that guides informed decision-making in the creative industry.
Over the past decade, journalists have created in-depth interactive narratives to provide an alternative to the relentless 24-hour news cycle. Combining different media forms, such as text, audio, video, and data visualisation, with the interactive possibilities of digital media, these narratives involve users in new ways. In journalism studies, the convergence of different media forms in this manner has gained significant attention; interactivity as part of this form, however, has remained underappreciated. In this study, we scrutinise how navigational structure, expressed as navigational cues, shapes user agency in individual explorations of the narrative. Approaching interactive narratives as story spaces with unique interactive architectures, we reconstruct the architecture of five Dutch interactive narratives using the walkthrough method. We find that the extensiveness of the interactive architectures can be described on a continuum between closed and open navigational structures that predetermine, and thus shape, users' trajectories in diverse ways.
Expectations are high for digital technologies to address sustainability-related challenges. While research into such applications and the twin transformation is growing rapidly, insight into the actual daily practices of digital sustainability within organisations is lacking. This is problematic, as the contributions of digital tools to sustainability goals take shape in organisational practices. To bridge this gap, we develop a theoretical perspective on digital sustainability practices based on practice theory, with an emphasis on the concept of sociomateriality. We argue that connecting meanings related to sustainability with digital technologies is essential to establish beneficial practices. Next, we contend that the meaning of sustainability is context-specific, which calls for a local meaning-making process. Based on our theoretical exploration, we develop an empirical research agenda.
In recent years, drones have increasingly supported First Responders (FRs) in monitoring incidents and providing additional information. However, analysing drone footage is time-intensive and cognitively demanding. In this research, we investigate the use of AI models for detecting humans in drone footage to aid FRs in tasks such as locating victims. Detecting small-scale objects, particularly humans viewed from high altitudes, poses a challenge for AI systems. We present the first steps in introducing and evaluating a series of YOLOv8 Convolutional Neural Networks (CNNs) for human detection in drone images. The models were fine-tuned on a drone image dataset created with the Dutch Fire Services and achieved a 53.1% F1-score, identifying 439 out of 825 humans in the test dataset. These preliminary findings, validated by an incident commander, highlight the promising utility of these models. Ongoing efforts aim to further refine the models and explore additional technologies.
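A sketch of such a fine-tuning and inference setup with the off-the-shelf ultralytics package is shown below. The dataset config, image size and class name 'person' are assumptions (the Dutch Fire Services dataset is not public); the final lines only back out the precision implied by the reported recall (439/825) and F1-score, a figure the abstract itself does not state.

```python
from ultralytics import YOLO  # pip install ultralytics

# Fine-tune a pretrained YOLOv8 model on a custom drone-image dataset.
# 'drone_humans.yaml' is a placeholder dataset config. A larger input
# size (imgsz) is one common way to help with small objects seen from
# high altitude; the paper's actual training settings are not given here.
model = YOLO("yolov8n.pt")
model.train(data="drone_humans.yaml", epochs=50, imgsz=1280)

# Run inference on a new frame and keep only the human detections
# (assuming the class is labelled 'person' in the dataset config).
for result in model("frame_0001.jpg"):
    for box in result.boxes:
        if model.names[int(box.cls)] == "person":
            print(box.xyxy.tolist(), float(box.conf))

# Sanity-check the reported figures: 439 of 825 humans found gives the
# recall; solving F1 = 2PR / (P + R) for P yields the implied precision.
recall = 439 / 825                        # ~0.532
f1 = 0.531
precision = f1 * recall / (2 * recall - f1)
print(f"recall={recall:.3f}, implied precision={precision:.3f}")
```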