The built environment requires energy-flexible buildings to reduce energy peak loads and to maximize the use of (decentralized) renewable energy sources. The challenge is to arrive at smart control strategies that respond to the increasing variations in both energy demand and energy supply. This enables grid integration in existing energy networks with limited capacity and maximizes the use of decentralized sustainable generation. Buildings can play a key role in optimizing grid capacity by applying demand-side management control. To adjust the grid energy demand profile of a building without compromising user requirements, the building should acquire some energy flexibility capacity. The main ambition of Brains for Buildings Work Package 2 is to develop smart control strategies that use the operational flexibility of non-residential buildings to minimize energy costs, reduce emissions, and avoid spikes in power network load, without compromising comfort levels. To realize this ambition, the following key components will be developed within B4B WP2: (A) open-source HVAC and electric services models, (B) energy demand prediction models, and (C) flexibility management control models. This report describes the first two key components, (A) and (B).
This report presents different prediction models covering various building components. The models are of three types: white-box models, grey-box models, and black-box models. Each model is presented in a separate chapter. Each chapter starts with the goal of the prediction model, followed by a description of the model and the results obtained when applied to a case study. The models developed are: (1) white-box models based on Modelica libraries for energy prediction of a building and its components; (2) a hybrid predictive digital twin based on white-box building models to predict the dynamic energy response of the building and its components; (3) a grey-box model that uses CO₂ monitoring data to derive either ventilation flow rate or occupancy; (4) white-box and black-box models to predict the heating demand of a building; (5) a feedforward neural network model to predict building energy use and its uncertainty; and (6) a black-box model to predict PV solar production.
The first model aims to predict the energy use and energy production patterns of different building configurations using open-source software (OpenModelica) and open-source libraries (the IBPSA libraries). The white-box model simulation results are used to produce design and control advice for increasing building energy flexibility. The use of the libraries was first tested on a simple residential unit and is now being tested on a non-residential building, the Haagse Hogeschool building. The lessons learned show that it is possible to model a building using a combination of libraries; however, developing the model is very time-consuming. The test also highlighted the need to define standard scenarios for testing energy flexibility, and the need for practical visualization if the simulation results are to be used to advise on potential increases in energy flexibility.
The goal of the hybrid model, which combines a white-box model of the building and its systems with a data-driven model of user behaviour, is to predict the energy demand and energy supply of a building. Its application focuses on the use case of the TNO building at Stieltjesweg in Delft during a summer period, with a specific emphasis on cooling demand. Preliminary analysis shows that the monitored building behaviour is in line with the simulation results. Development is currently in progress to improve the model predictions by including solar shading from surrounding buildings, models of automatic shading devices, and model calibration including the energy use of the chiller.
The goal of the third model is to derive the recent and current ventilation flow rate over time from monitoring data on CO₂ concentration and occupancy, and, conversely, to derive recent and current occupancy from monitoring data on CO₂ concentration and ventilation flow rate. The grey-box model is based on the GEKKO Python tool (a minimal sketch follows at the end of this summary). The model was tested with data from six office rooms at Windesheim University of Applied Sciences. The model had low precision when deriving the ventilation flow rate, especially at low CO₂ concentrations, and good precision when deriving occupancy from CO₂ concentration and ventilation flow rate. Further research is needed to determine whether these findings apply in different situations, such as meeting spaces and classrooms.
The goal of the fourth chapter is to compare a simplified white-box model and a black-box model for predicting the heating energy use of a building, with the aim of integrating these prediction models into the energy management systems of SME buildings. The two models were tested with data from a residential unit, since data from an SME building was not available at the time of the analysis. The prediction models developed have low accuracy and in their current form cannot be integrated into an energy management system. In general, the black-box model achieved higher accuracy than the white-box model.
The goal of the fifth model is to predict the energy use in a building using a black-box model and to quantify the uncertainty of the prediction. The black-box model is based on a feedforward neural network. It was tested with data from two buildings: an educational building and a commercial building. The strength of the model lies in its ensemble prediction and in treating the uncertainty that is intrinsically present in the data as an absolute deviation. Using a rolling-window technique, the model can predict energy use and uncertainty while incorporating possible changes in building use. Testing on two different cases demonstrates the applicability of the model to different types of buildings.
The goal of the sixth and last model is to predict the energy production of PV panels in a building using a black-box model. The choice to model the PV panels is based on an analysis of the main contributors to peak energy demand and peak energy delivery in the case of the DWA office building. On a fault-free test set, the model meets the requirements for a calibrated model according to the FEMP and ASHRAE criteria for the error metrics; according to the IPMVP criteria, the model should be improved further. The performance metrics are in the same range as values found in the literature. For accurate peak prediction, a year of training data is recommended for the given approach without lagged variables.
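As an illustration of the grey-box approach in the third model, the sketch below sets up a single-zone CO₂ mass balance in GEKKO and estimates the ventilation flow rate from measured CO₂ and occupancy. It is a minimal sketch, not the project's implementation: the room volume, per-person CO₂ generation rate, and the synthetic measurement series are assumed placeholder values.

    # Minimal sketch: estimate ventilation flow from CO2 and occupancy
    # with GEKKO. Room volume, generation rate and the "measurements"
    # are illustrative assumptions, not values from the study.
    import numpy as np
    from gekko import GEKKO

    t = np.linspace(0, 8, 33)                         # 8 h, 15-min samples
    co2_meas = 420 + 600 * np.exp(-((t - 4) / 2)**2)  # placeholder CO2 [ppm]
    occ = np.where((t > 1) & (t < 7), 2, 0)           # placeholder occupancy

    m = GEKKO(remote=False)
    m.time = t
    V = 75.0       # room volume [m3] (assumed)
    C_OUT = 420.0  # outdoor CO2 [ppm] (assumed)
    G = 0.018      # CO2 generation per person [m3/h] (typical office value)

    n = m.Param(value=occ)             # known occupancy profile
    C = m.CV(value=co2_meas)           # CO2 state, fitted to measurements
    C.FSTATUS = 1
    Q = m.MV(value=50, lb=0, ub=500)   # ventilation flow [m3/h] to estimate
    Q.STATUS = 1

    # Single-zone mass balance: V*dC/dt = Q*(C_out - C) + 1e6*G*n
    m.Equation(V * C.dt() == Q * (C_OUT - C) + 1e6 * G * n)

    m.options.IMODE = 5                # moving-horizon estimation
    m.solve(disp=False)
    print(Q.value)                     # estimated flow rate over time

Swapping the roles of n and Q (known ventilation flow, unknown occupancy) gives the converse estimation problem described above.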
This report presents the results and lessons learned from implementing white-box, grey-box, and black-box models to predict the energy use and energy production of buildings, or of variables directly related to them. Each model has its advantages and disadvantages. Further research is needed to develop the full potential of this approach.
The full potential of predictive maintenance has not yet been utilised. Current solutions focus on individual steps of the predictive maintenance cycle and only work for very specific settings. The overarching challenge of predictive maintenance is to leverage these individual building blocks to obtain a framework that supports optimal maintenance and asset management. The PrimaVera project has identified four obstacles to tackle in order to utilise predictive maintenance to its full potential: lack of orchestration and automation of the predictive maintenance workflow; inaccurate or incomplete data; and the role of human and organisational factors in data-driven decision support tools. Furthermore, an intuitive, generically applicable predictive maintenance process model is presented in this paper to provide a structured way of deploying predictive maintenance solutions.
https://doi.org/10.3390/app10238348
Routine immunization (RI) of children is the most effective and timely public health intervention for decreasing child mortality rates around the globe. Pakistan, a low- and middle-income country (LMIC), has one of the highest child mortality rates in the world, mainly due to vaccine-preventable diseases (VPDs). To improve RI coverage, a critical need is to identify potential RI defaulters at an early stage, so that appropriate interventions can be targeted at the population identified to be at risk of missing their scheduled vaccine uptakes. In this paper, a machine learning (ML) based predictive model is proposed to predict defaulting and non-defaulting children for upcoming immunization visits and to examine the effect of its underlying contributing factors. The predictive model uses data from the Paigham-e-Sehat study, comprising immunization records of 3,113 children. The model is designed to balance accuracy, specificity, and sensitivity, so that its outcomes remain practically relevant to the problem addressed. It is further optimized by selecting significant features and removing data bias. Nine machine learning algorithms were applied to predict which children would default on their next immunization visit. The results showed that the random forest model achieves the best accuracy of 81.9%, with 83.6% sensitivity and 80.3% specificity. The main determinants of vaccination coverage were found to be vaccine coverage at birth, parental education, and the socio-economic conditions of the defaulting group. This information can assist policy makers in taking proactive and effective measures and in developing evidence-based, targeted, and timely interventions for defaulting children.
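A minimal sketch of this kind of evaluation, assuming a scikit-learn setup; the features and labels below are random placeholders, not the study's actual variables (such as vaccine coverage at birth or parental education):

    # Minimal sketch: random forest defaulter prediction evaluated by
    # accuracy, sensitivity and specificity. Data are placeholders.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import confusion_matrix

    rng = np.random.default_rng(0)
    X = rng.normal(size=(3113, 10))    # placeholder child/household features
    y = rng.integers(0, 2, size=3113)  # placeholder labels (1 = defaulter)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y,
                                              random_state=0)
    clf = RandomForestClassifier(n_estimators=300, class_weight="balanced",
                                 random_state=0).fit(X_tr, y_tr)

    tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
    sensitivity = tp / (tp + fn)   # share of true defaulters caught
    specificity = tn / (tn + fp)   # share of non-defaulters correctly cleared
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    print(f"acc={accuracy:.3f} sens={sensitivity:.3f} spec={specificity:.3f}")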
Huntington’s disease (HD) and various spinocerebellar ataxias (SCA) are autosomal dominantly inherited neurodegenerative disorders caused by a CAG repeat expansion in the disease-related gene [1]. The impact of HD and SCA on families and individuals is enormous and far-reaching, as patients typically display first symptoms during midlife. HD is characterized by unwanted choreatic movements, behavioural and psychiatric disturbances, and dementia. SCAs are mainly characterized by ataxia, but also by other symptoms including cognitive deficits, similarly affecting quality of life and leading to disability. These problems worsen as the disease progresses: affected individuals become unable to work, drive, or care for themselves, placing an enormous burden on their family and caregivers, and patients eventually require intensive nursing home care, while lifespan is reduced. Although the clinical and pathological phenotypes are distinct for each CAG repeat expansion disorder, similar molecular mechanisms are thought to underlie the effect of expanded CAG repeats in different genes. The predicted age of onset (AO) for HD, SCA1, and SCA3 (and five other CAG-repeat diseases) is based on the polyQ expansion, but the CAG/polyQ length determines the AO for only about 50%. A large variation in AO is observed, especially for the most common range between 40 and 50 repeats [11,12]. Large differences in onset, especially in the range of 40-50 CAGs, not only imply that current individual predictions of AO are imprecise (affecting important life decisions that patients need to make and hampering assessment of potential onset-delaying interventions) but also offer optimism that (patient-related) factors exist that can delay the onset of disease.
To address both items, we need to generate a better model, based on patient-derived cells, that generates parameters that not only mirror the CAG-repeat length dependency of these diseases but also better predict inter-patient variations in disease susceptibility and in the effectiveness of interventions. To this end, we will use a staggered project design, as explained in 5.1, in which we first determine which cellular and molecular determinants (referred to as landscapes) in isogenic iPSC models are associated with increased CAG repeat lengths, using deep-learning algorithms (DLA) (WP1). For this, we will use a well-characterized control cell line in which we modify the CAG repeat length in the endogenous Ataxin-1, Ataxin-3, and Huntingtin genes from wild-type Q repeats to intermediate, adult-onset, and juvenile polyQ repeats. We will next expand the model with cells from existing and new cohorts of early-onset, adult-onset, and late-onset/intermediate-repeat patients for the three diseases (SCA1, SCA3, and HD), for which, besides accurate AO information, clinical parameters (MRI scans, cerebrospinal fluid markers, etc.) will be (made) available. This will be used for validation and for fine-tuning the molecular landscapes (again using DLA) towards the best prediction of individual patient-related clinical markers and AO (WP3).
The same models and (most relevant) landscapes will also be used to evaluate novel mutant-protein-lowering strategies as they emerge from WP4. This overall development of landscape prediction is an iterative process that involves (a) data processing (WP5); (b) unsupervised data exploration and dimensionality reduction to find patterns in the data and create “labels” for similarity; and (c) development of supervised Deep Learning (DL) models for landscape prediction based on the labels from the previous step. Each iteration starts with data that is generated and deployed according to FAIR principles, and the developed deep-learning system will be instrumental in connecting these WPs. Insights into algorithm sensitivity from the predictive models will form the basis for discussion with field experts on the distinctions and their phenotypic consequences. While the full development of accurate diagnostics might go beyond the timespan of the five-year project, ideally our final landscapes can be used for new genetic counselling: when somebody tests positive for the gene, can we use his or her cells, feed them into the generated cell-based model, and better predict the AO and severity? While this will answer questions from clinicians and patient communities, it will also generate new ones, which is why we will study the ethical implications of such improved diagnostics in advance (WP6).
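A minimal sketch of the iterative loop of steps (b) and (c), unsupervised label creation followed by a supervised model; the data are random placeholders standing in for the per-cell-line molecular readouts, and all names and sizes are illustrative assumptions:

    # Minimal sketch of (b) and (c): dimensionality reduction and
    # clustering create similarity "labels"; a supervised model is
    # then trained on them. Data here are random placeholders.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(1)
    features = rng.normal(size=(120, 500))  # placeholder molecular features

    # (b) unsupervised: reduce dimensionality, cluster into "labels"
    z = PCA(n_components=10, random_state=1).fit_transform(features)
    labels = KMeans(n_clusters=4, n_init=10, random_state=1).fit_predict(z)

    # (c) supervised: train a model on those labels; its learned
    # sensitivities are what would be discussed with field experts
    clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=2000,
                        random_state=1).fit(features, labels)
    print(clf.score(features, labels))      # in practice: cross-validated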
The postdoc candidate, Sondos Saad, will strengthen connections between the research groups Asset Management (AM) and Data Science (DS) and the Civil Engineering bachelor programme (CE) of HZ. The proposed research aims to deepen knowledge about the complex, multidisciplinary prediction of performance deterioration in turbomachinery, in order to optimize cleaning costs, decrease failure risk, and promote the efficient use of water and energy resources. It targets key challenges faced by industries, oil and gas refineries, and utility companies in the adoption of circular maintenance. The study of AM is already part of the CE curriculum, but the ambition of this postdoc is that AM principles also become applied and visible. Therefore, from the first year of the programme, the postdoc will develop an AM material science line and will facilitate applied research experiences for students, in collaboration with engineering companies, operation and maintenance contractors, and governmental bodies. Consequently, a new generation of efficient, sustainability-sensitive civil engineers could be trained, as the labour market requires. The subject is broad and relevant for the future of our built environment becoming more sustainable with a smaller CO₂ footprint, with possible connections to other fields of study, such as Engineering, Economics, and Chemistry. The project also contributes strongly to the goals of the National Science Agenda (NWA), in the themes “Circulaire economie en grondstoffenefficiëntie”, “Meten en detecteren: altijd, alles en overal”, and “Smart Industry”. The final products will be: a framework for data-driven AM to determine and quantify key parameters of performance degradation for predictive AM strategies, applied as a diagnostic decision-support toolbox for optimizing cleaning and maintenance; a portfolio of applications and examples; and a new continuous learning line about AM within the CE curriculum. The postdoc will be mentored and supervised by the Lector of the AM research group and by the study programme coordinator (SPC). The personnel policy and job function series of HZ facilitate this development opportunity.
Predictive maintenance, using data from thousands of sensors already available, is key to optimizing the maintenance schedule and preventing unexpected failures in industry. Current maintenance concepts (in the maritime industry) are based on a fixed maintenance interval for each piece of equipment, with enough safety margin to minimize incidents. This means that maintenance is usually carried out too early and sometimes too late. This is particularly true for maintenance on maritime equipment, where onshore maintenance is strongly preferred over offshore maintenance and needs to be aligned with the vessel’s operations schedule. However, state-of-the-art predictive maintenance methods rely on black-box machine learning techniques, such as deep neural networks, that are difficult to interpret and difficult for maintenance engineers to accept and work with. The XAIPre project (pronounced “Xyper”) aims to develop Explainable Predictive Maintenance algorithms that provide engineers not only with a prediction but also with a risk analysis of the components when maintenance is delayed, and with the primary indicators the algorithms use to arrive at their inferences. To use predictive maintenance effectively in maritime operations, the predictive models, and the optimization of the maintenance schedule using these models, need to be aware of past and planned vessel activities, since different activities affect the lifetime of the machines differently. For example, the degradation of a hydraulic pump inside a crane depends on the type of operations that the crane, but also the vessel, is performing. Thus the models need to be not only explainable but also context-aware, where the context in this case is the vessel and machinery activity. Using sensor data processing and edge-computing technologies that will be developed and applied by the Hanze University of Applied Sciences in Groningen (Hanze UAS), context information is extracted from the raw sensor data. The XAIPre project combines these explainable, context-aware machine learning models with state-of-the-art optimizers, already developed and available from the NWO CIMPLO project at LIACS, to develop optimal maintenance schedules for machine components. The resulting XAIPre prototype offers significant competitive advantages for maritime companies such as Heerema by increasing the longevity of machine components, increasing worker safety, and decreasing maintenance costs.
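As an illustration of the explainability idea (not the XAIPre implementation), the sketch below fits a degradation model on synthetic data and ranks the primary indicators behind its predictions with permutation importance; the feature names (e.g. "crane_load_cycles") are hypothetical:

    # Minimal sketch: a model predicts remaining useful life and a
    # permutation-importance analysis surfaces the primary indicators.
    # Features and their effects are synthetic assumptions.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.inspection import permutation_importance

    rng = np.random.default_rng(2)
    names = ["crane_load_cycles", "vessel_activity", "oil_temperature",
             "pump_pressure", "vibration_rms"]
    X = rng.normal(size=(2000, len(names)))
    rul = 500 - 40 * X[:, 0] - 25 * X[:, 1] \
          + rng.normal(scale=10, size=2000)   # synthetic remaining life [h]

    model = GradientBoostingRegressor(random_state=2).fit(X, rul)
    imp = permutation_importance(model, X, rul, n_repeats=10,
                                 random_state=2)
    for name, score in sorted(zip(names, imp.importances_mean),
                              key=lambda p: -p[1]):
        print(f"{name:>18s}: {score:.1f}")    # primary indicators, ranked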