Machine learning models have proven to be reliable methods in classification tasks. However, little research has been done on classifying dwelling characteristics based on smart meter and weather data. Gaining insight into dwelling characteristics can help create or improve policies for building new dwellings to the nearly zero-energy building (NZEB) standard. This paper compares different machine learning algorithms and the methods used to implement the models correctly, including data pre-processing, model validation, and evaluation. Smart meter data, provided by Groene Mient, was used to train several machine learning algorithms, and the resulting models were compared on their performance. The results showed that a Recurrent Neural Network (RNN) performed best, with 96% accuracy. Cross-validation was used to validate the models, with 80% of the data used for training and 20% for testing. Evaluation metrics were used to produce classification reports, which indicate which of the models works best for this specific problem. The models were programmed in Python.
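A minimal sketch of the 80/20 validation and classification-report workflow described above, in Python with scikit-learn. The feature matrix and labels are synthetic stand-ins for the Groene Mient smart meter data, which is not included in the abstract; the feature layout and classifier choice are illustrative assumptions.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 24))    # e.g. 24 hourly consumption features per dwelling
y = rng.integers(0, 3, size=500)  # e.g. three heating-system classes

# 80% of the data for training, 20% held out for testing, as in the study.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```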
DOCUMENT
This paper presents a case study in which a model predictive control (MPC) logic is developed for energy-flexible operation of a space heating system in an educational building. A Long Short-Term Memory (LSTM) neural network surrogate model is trained on the output of an EnergyPlus building simulation model. This LSTM model is used within an MPC framework in which a genetic algorithm optimizes setpoint sequences. The EnergyPlus model is used to validate the performance of the control logic. The MPC approach leads to a substantial reduction in energy consumption (7%) and energy costs (13%) with improved comfort performance. Additional energy cost savings are possible (7–16%) if a sacrifice in indoor thermal comfort is accepted. The presented method is useful for developing MPC systems in the design stages, when measured data is typically not available. Additionally, this study illustrates that LSTM models are promising for MPC for buildings.
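A hedged sketch of the optimization step described above: a surrogate model (here a stub standing in for the trained LSTM) scores candidate heating-setpoint sequences, and a simple evolutionary search selects the best one over the horizon. The cost weighting, horizon length, and function names are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(1)
HORIZON = 12  # hours of setpoints optimized ahead (assumed)

def surrogate_cost(setpoints):
    """Stand-in for the LSTM surrogate: scalar cost combining a crude
    energy-use proxy with a penalty for setpoints below a comfort band."""
    energy = np.sum(np.maximum(setpoints - 15.0, 0.0))
    discomfort = np.sum(np.maximum(20.0 - setpoints, 0.0))
    return energy + 5.0 * discomfort

def evolve(pop_size=40, generations=30):
    pop = rng.uniform(15.0, 24.0, size=(pop_size, HORIZON))
    for _ in range(generations):
        costs = np.array([surrogate_cost(ind) for ind in pop])
        parents = pop[np.argsort(costs)[: pop_size // 2]]          # truncation selection
        children = parents + rng.normal(0.0, 0.3, parents.shape)   # Gaussian mutation
        pop = np.vstack([parents, np.clip(children, 15.0, 24.0)])
    return pop[np.argmin([surrogate_cost(ind) for ind in pop])]

best = evolve()
print("optimized setpoint sequence:", np.round(best, 1))
```

In a receding-horizon loop, only the first setpoint of the optimized sequence would be applied before re-optimizing at the next timestep.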
DOCUMENT
The labour market is continuously evolving, leading to an ever-changing demand for competences and jobs. Besides profession-specific skills and knowledge, this demands resilience and agility from professionals. Students are therefore expected to develop their self-regulated learning (SRL). SRL is about directing one's own learning process: students decide for themselves how to achieve learning outcomes, evaluate those outcomes, and adjust their learning process accordingly. For degree programmes, the question is how to guide and foster SRL. This requires insight into learning behaviour and its patterns, and awareness of how these insights can be used to support SRL and guide the learning process. This study inventoried whether the data students leave behind in the electronic learning environment (ELO) can give an indication of a student's learning process and SRL. To extract the intricate patterns from the data, the ELO data were analysed using AI techniques. This made it possible to divide students' learning processes into several categories. The categories give a first indication of a student's SRL. Further research is needed, including into what this means for supporting students in their learning process.
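An illustrative sketch of the kind of analysis described above: clustering per-student activity features extracted from ELO logs into behavioural categories. The specific features, cluster count, and use of k-means are assumptions for illustration; the study's actual pipeline is not detailed in the abstract.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# Synthetic per-student features: logins per week, resources viewed,
# fraction of assignments submitted before the deadline.
features = rng.random((200, 3)) * [20, 50, 1]

scaled = StandardScaler().fit_transform(features)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scaled)
print(np.bincount(labels))  # size of each behavioural category
```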
DOCUMENT
In this paper, artificial intelligence tools are implemented to predict trajectory positions, as well as the channel performance of an optical wireless communications link. Case studies for industrial scenarios are considered to this aim. In the first stage, system parameters are optimized using a hybrid multi-objective optimization (HMO) procedure based on the grey wolf optimizer and the non-dominated sorting genetic algorithm III (NSGA-III), with the goal of simultaneously maximizing power and spectral efficiency. In the second stage, we demonstrate that a long short-term memory (LSTM) neural network is able to predict positions as well as channel gain. In this way, the visible light communication (VLC) links can be configured with the optimal parameters provided by the HMO. The success of the proposed LSTM architectures was validated by training and test root-mean-square error evaluations below 1%.
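A minimal sketch of the second stage named above: an LSTM regressor predicting the next 2-D position from a sliding window of past positions. The synthetic trajectory, window length, and layer sizes are assumptions for illustration only.

```python
import numpy as np
from tensorflow import keras

rng = np.random.default_rng(3)
t = np.linspace(0, 20, 1200)
track = np.stack([np.sin(t), np.cos(0.5 * t)], axis=1)  # synthetic 2-D trajectory

WINDOW = 10  # past positions fed to the LSTM (assumed)
X = np.stack([track[i : i + WINDOW] for i in range(len(track) - WINDOW)])
y = track[WINDOW:]  # next position to predict

model = keras.Sequential([
    keras.layers.Input(shape=(WINDOW, 2)),
    keras.layers.LSTM(32),
    keras.layers.Dense(2),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print("RMSE:", float(np.sqrt(model.evaluate(X, y, verbose=0))))
```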
LINK
Machine learning models have proven to be reliable methods in classification tasks. However, little research has been conducted on the classification of dwelling characteristics based on smart meter and weather data. Gaining insights into dwelling characteristics, which include the type of heating system used, the number of inhabitants, and the number of solar panels installed, can be helpful in creating or improving policies to create new dwellings at the nearly zero-energy standard. This paper compares different supervised machine learning algorithms, namely Logistic Regression, Support Vector Machine, K-Nearest Neighbor, and Long Short-Term Memory (LSTM), and the methods used to correctly implement these algorithms. These methods include data pre-processing, model validation, and evaluation. Smart meter data, which was used to train several machine learning algorithms, was provided by Groene Mient. The models that were generated by the algorithms were compared on their performance. The results showed that the LSTM performed best, with 96% accuracy. Cross-validation was used to validate the models, where 80% of the data was used for training purposes and 20% was used for testing purposes. Evaluation metrics were used to produce classification reports, which indicate that the LSTM outperforms the compared models on the evaluation metrics for this specific problem.
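A hedged sketch of the best-performing model class named above: an LSTM classifier over daily smart meter load profiles. The synthetic data, window length, and layer sizes are assumptions for illustration only, not the study's configuration.

```python
import numpy as np
from tensorflow import keras

rng = np.random.default_rng(4)
X = rng.normal(size=(400, 48, 1))  # 400 dwellings, 48 half-hourly readings each
y = rng.integers(0, 3, size=400)   # e.g. heating-system class per dwelling

model = keras.Sequential([
    keras.layers.Input(shape=(48, 1)),
    keras.layers.LSTM(32),
    keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=3, validation_split=0.2, verbose=0)  # 80/20 split
```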
DOCUMENT
Accurate assessment of rolling resistance is important for wheelchair propulsion analyses. However, the commonly used drag and deceleration tests are reported to underestimate rolling resistance by up to 6% due to the (neglected) influence of trunk motion. The first aim of this study was to investigate the accuracy of using trunk and wheelchair kinematics to predict the intra-cyclical load distribution, more particularly front wheel loading, during hand-rim wheelchair propulsion. Secondly, the study compared the accuracy of rolling resistance determined from the predicted load distribution with the accuracy of drag test-based rolling resistance. Twenty-five able-bodied participants performed hand-rim wheelchair propulsion on a large motor-driven treadmill. During the treadmill sessions, front wheel load was assessed with load pins to determine the load distribution between the front and rear wheels. Accordingly, a machine learning model was trained to predict front wheel load from kinematic data. Based on two inertial sensors (attached to the trunk and wheelchair) and the machine learning model, front wheel load was predicted with a mean absolute error (MAE) of 3.8% (or 1.8 kg). Rolling resistance determined from the predicted load distribution (MAE: 0.9%, mean error (ME): 0.1%) was more accurate than drag test-based rolling resistance (MAE: 2.5%, ME: −1.3%).
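An illustrative sketch of the two-step idea: regress front wheel load from inertial-sensor features, then derive rolling resistance from the predicted front/rear load split using per-wheel rolling-resistance coefficients. The model choice, synthetic features, and coefficient values are assumptions, not the study's identified parameters.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(5)
imu_features = rng.normal(size=(1000, 6))  # e.g. trunk pitch, accelerations
front_load = 20 + 5 * imu_features[:, 0] + rng.normal(0, 1, 1000)  # kg (synthetic)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(imu_features[:800], front_load[:800])
pred = model.predict(imu_features[800:])
print("MAE (kg):", np.mean(np.abs(pred - front_load[800:])))

# Rolling resistance from the load split, with assumed per-wheel coefficients.
MU_FRONT, MU_REAR, TOTAL_MASS, G = 0.02, 0.01, 90.0, 9.81
rear_load = TOTAL_MASS - pred
f_roll = (MU_FRONT * pred + MU_REAR * rear_load) * G  # newtons
```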
DOCUMENT
Completeness of data is vital for decision making and forecasting in Building Management Systems (BMS), as missing data can result in biased decision making down the line. This study creates a guideline for imputing gaps in BMS datasets by comparing four methods: the K-Nearest Neighbour algorithm (KNN), a Recurrent Neural Network (RNN), Hot Deck (HD), and Last Observation Carried Forward (LOCF). The guideline gives the best method per gap size and scale of measurement. The four selected methods come from various backgrounds and are tested on a real BMS and meteorological dataset. The focus of this paper is not to impute every cell as accurately as possible but to impute trends back into the missing data. Performance is characterised by a set of criteria that allow users to choose the imputation method best suited to their needs. The criteria are Variance Error (VE) and Root Mean Squared Error (RMSE). VE was given more weight, as it evaluates the imputed trend better than RMSE does. From preliminary results, it was concluded that the best K-values for KNN are 5 for the smallest gap and 100 for the larger gaps. Using a genetic algorithm, the best RNN architecture for the purpose of this paper was determined to be the Gated Recurrent Unit (GRU). The comparison was performed using a different training dataset than the imputation dataset. The results show no consistent link between differences in kurtosis or skewness and imputation performance. The experiment concluded that RNN is best for interval data and HD is best for both nominal and ratio data. No single method was best for all gap sizes, as performance depended on the data to be imputed.
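A sketch of the comparison framework described above: punch a gap into a series, impute it with LOCF and KNN, and score both with RMSE and a variance error. The VE definition used here (relative difference in variance over the gap) is an assumption based on the abstract's description, as is the auxiliary feature given to the KNN imputer.

```python
import numpy as np
import pandas as pd
from sklearn.impute import KNNImputer

rng = np.random.default_rng(6)
t = np.linspace(0, 12, 300)
true = pd.Series(np.sin(t) + rng.normal(0, 0.1, 300))
gapped = true.copy()
gapped.iloc[120:150] = np.nan  # introduce a 30-sample gap

locf = gapped.ffill().to_numpy()  # Last Observation Carried Forward
frame = pd.DataFrame({"target": gapped, "aux": np.cos(t)})  # aux drives neighbours
knn = KNNImputer(n_neighbors=5).fit_transform(frame)[:, 0]

gap = slice(120, 150)
truth = true.to_numpy()

def scores(imputed):
    rmse = np.sqrt(np.mean((imputed[gap] - truth[gap]) ** 2))
    ve = abs(np.var(imputed[gap]) - np.var(truth[gap])) / np.var(truth[gap])
    return round(rmse, 3), round(ve, 3)

print("LOCF rmse/ve:", scores(locf))
print("KNN  rmse/ve:", scores(knn))
```

LOCF typically scores poorly on VE here because a carried-forward constant has zero variance over the gap, which is exactly the trend-flattening behaviour the paper's VE criterion is meant to penalise.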
DOCUMENT
This study presents an automated method for detecting and measuring the apex head thickness of tomato plants, a critical phenotypic trait associated with plant health, fruit development, and yield forecasting. Because the apex is sensitive to physical contact, non-invasive monitoring is essential. This paper addresses the demand among Dutch growers for automated, contactless systems. Our approach integrates deep learning models (YOLO and Faster R-CNN) with RGB-D camera imaging to enable accurate, scalable, and non-invasive measurement in greenhouse environments. A dataset of 600 RGB-D images, captured in a controlled greenhouse, was fully preprocessed, annotated, and augmented for optimal training. Experimental results show that YOLOv8n achieved superior performance, with a precision of 91.2%, a recall of 86.7%, and an Intersection over Union (IoU) score of 89.4%. Other models, such as YOLOv9t, YOLOv10n, YOLOv11n, and Faster R-CNN, demonstrated lower precision scores of 83.6%, 74.6%, 75.4%, and 78%, respectively. Their IoU scores were also lower, indicating less reliable detection. This research establishes a robust, real-time method for precision agriculture through automated apex head thickness measurement.
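A hedged sketch of the measurement step: detect the apex head with a YOLOv8 model via the ultralytics API, then convert the detected pixel width to millimetres using the aligned depth image and a pinhole camera model. The weights file, image paths, and focal length are placeholders, not the study's artifacts.

```python
import numpy as np
from ultralytics import YOLO

model = YOLO("apex_yolov8n.pt")            # hypothetical fine-tuned weights
result = model("greenhouse_frame.png")[0]  # RGB frame from the RGB-D camera

depth = np.load("greenhouse_frame_depth.npy")  # aligned depth map in metres
FX = 615.0                                      # assumed focal length in pixels

for box in result.boxes.xyxy.cpu().numpy():
    x1, y1, x2, y2 = box
    cx, cy = int((x1 + x2) / 2), int((y1 + y2) / 2)
    z = depth[cy, cx]                           # depth at the box centre
    width_mm = (x2 - x1) * z / FX * 1000.0      # pinhole back-projection
    print(f"apex head thickness ≈ {width_mm:.1f} mm")
```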
DOCUMENT
Adverse Outcome Pathways (AOPs) are conceptual frameworks that tie an initial perturbation (molecular initiating event) to a phenotypic toxicological manifestation (adverse outcome) through a series of steps (key events). They therefore provide a standardized way to map and organize toxicological mechanistic information. As such, AOPs inform on key events underlying toxicity, thus supporting the development of New Approach Methodologies (NAMs), which aim to reduce the use of animal testing for toxicology purposes. However, the establishment of a novel AOP relies on the gathering of multiple streams of evidence and information, from available literature to knowledge databases. Often, this information comes as free text, also called unstructured text, which is not immediately digestible by a computer. Processing it manually is thus both tedious and, given the growing volume of data available, increasingly time-consuming. The advancement of machine learning provides alternative solutions to this challenge. To extract and organize information from relevant sources, it seems valuable to employ deep learning Natural Language Processing (NLP) techniques. We review here some of the recent progress in the NLP field and show how these techniques have already demonstrated value in the biomedical and toxicology areas. We also propose an approach to efficiently and reliably extract and combine relevant toxicological information from text. These data can be used to map underlying mechanisms that lead to toxicological effects and to start building quantitative models, in particular AOPs, ultimately allowing animal-free, human-based hazard and risk assessment.
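A minimal sketch of the kind of NLP step discussed above: run a pretrained biomedical named-entity-recognition model over free text and collect chemical and biological entities as candidate key-event evidence. The model checkpoint name is hypothetical; any HuggingFace token-classification model fine-tuned on biomedical text could be slotted in.

```python
from transformers import pipeline

ner = pipeline("token-classification",
               model="some-org/biomedical-ner",  # hypothetical checkpoint
               aggregation_strategy="simple")

text = ("Exposure to bisphenol A activates estrogen receptor alpha, "
        "leading to altered cell proliferation.")
for ent in ner(text):
    print(ent["entity_group"], "->", ent["word"])
```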
DOCUMENT