This article reports a verification and validation (V&V) of the (Excel) BioGas simulator, or EBS model. The EBS model calculates the environmental impact of biogas production pathways using Material and Energy Flow Analysis, time-dependent dynamics, geographic information, and Life Cycle Analysis. In this article a V&V method is researched, selected, and applied to validate the EBS model. Through the use of this method, mistakes in the model are resolved, the strengths and weaknesses of the model are identified, and the concept of the model is tested and strengthened. The validation process not only improves the model but also helps the modelers widen their focus and scope. This article can therefore also be used in the validation process of similar models. The main result of the V&V process indicates that the EBS model is valid; however, it should be considered an expert model and should only be used by expert users.
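As an illustration of the kind of operational check such a V&V process involves, the sketch below compares model outputs against independent reference values and flags deviations beyond an accepted tolerance. All variable names, values, and the 10% tolerance are illustrative assumptions, not taken from the EBS study.

```python
# Hypothetical operational-validation check: flag model outputs whose
# relative deviation from independent reference values exceeds a tolerance.

def validate_outputs(model_outputs: dict, reference_values: dict,
                     tolerance: float = 0.10) -> dict:
    """Return, per output variable, whether it lies within tolerance."""
    results = {}
    for name, ref in reference_values.items():
        deviation = abs(model_outputs[name] - ref) / abs(ref)
        results[name] = deviation <= tolerance
    return results

# Example with made-up numbers: the emission figure passes (5% deviation),
# the energy yield fails (15% deviation).
print(validate_outputs({"co2_kg_per_mj": 0.021, "energy_yield_mj": 115.0},
                       {"co2_kg_per_mj": 0.020, "energy_yield_mj": 100.0}))
```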
The Nutri-Score front-of-pack label, which classifies the nutritional quality of products into one of five classes (A to E), is one of the main candidates for standardized front-of-pack labeling in the EU. The algorithm underpinning the Nutri-Score label is derived from the Food Standards Agency (FSA) nutrient profile model, originally a binary model developed to regulate the marketing of foods to children in the UK. This review describes the development and validation process of the Nutri-Score algorithm. While the Nutri-Score label is one of the most studied front-of-pack labels in the EU, its validity and applicability in the European context are still undetermined. For several European countries, content validity (i.e., the ability to rank foods according to healthfulness) has been evaluated. Studies showed Nutri-Score's ability to classify foods across the full breadth of the food supply, but did not show the actual healthfulness of products within the different classes. Convergent validity (i.e., the ability to categorize products in a similar way as other systems, such as dietary guidelines) was assessed against the French dietary guidelines; further adaptations of the Nutri-Score algorithm seem needed to ensure alignment with food-based dietary guidelines across the EU. Predictive validity (i.e., the ability to predict disease risk when applied to population dietary data) could be re-assessed after adaptations are made to the algorithm. Currently, seven countries have implemented or aim to implement Nutri-Score. These countries have appointed an international scientific committee to evaluate Nutri-Score, its underlying algorithm, and its applicability in a European context. With this review, we hope to contribute to the scientific and political discussions with respect to nutrition labeling in the EU.
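As an illustration of how such an algorithm turns nutrient profile points into one of the five classes, a minimal sketch follows. The point rules and class cut-offs shown are those commonly cited for the original (2017) Nutri-Score algorithm for general foods (beverages, fats, and cheeses use modified rules); they are assumptions drawn from the published algorithm, not from this review.

```python
# Sketch of the FSA-derived Nutri-Score computation for general foods.
# Unfavourable ("N") points: 0-10 each for energy, saturated fat, sugars,
# and sodium. Favourable points: 0-5 each for fruit/vegetables, fibre,
# and protein. The per-nutrient point tables themselves are omitted here.

def final_score(n_points: int, fruit_veg_pts: int, fibre_pts: int,
                protein_pts: int) -> int:
    """If N points reach 11 and fruit/vegetable points stay below 5,
    protein points are not subtracted (FSA nutrient profile rule)."""
    if n_points >= 11 and fruit_veg_pts < 5:
        return n_points - fruit_veg_pts - fibre_pts
    return n_points - fruit_veg_pts - fibre_pts - protein_pts

def nutriscore_class(score: int) -> str:
    """Map the final score to a class (general-food cut-offs)."""
    if score <= -1:
        return "A"
    if score <= 2:
        return "B"
    if score <= 10:
        return "C"
    if score <= 18:
        return "D"
    return "E"

# Example: 12 N points, 2 fruit/veg, 3 fibre, 4 protein points.
# Protein is not counted (12 >= 11 and 2 < 5): 12 - 2 - 3 = 7 -> class C.
print(nutriscore_class(final_score(12, 2, 3, 4)))
```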
Due to a lack of transparency in both algorithm and validation methodology, it is difficult for researchers and clinicians to select the appropriate tracker for their application. The aim of this work is to transparently present an adjustable physical activity classification algorithm that discriminates between dynamic, standing, and sedentary behavior. By means of easily adjustable parameters, the algorithm's performance can be optimized for applications with different target populations and locations of tracker wear. For an elderly target population with a tracker worn on the upper leg, the algorithm is optimized and validated under simulated free-living conditions. The fixed activity protocol (FAP) is performed by 20 participants; the simulated free-living protocol (SFP) involves another 20. The data segmentation window size and the amount-of-physical-activity threshold are optimized; the sensor orientation threshold is kept fixed. The algorithm is validated on 10 participants who perform the FAP and 10 participants who perform the SFP. Percentage error (PE) and absolute percentage error (APE) are used to assess the algorithm's performance. Standing and sedentary behavior are classified within acceptable limits (+/- 10% error) under both fixed and simulated free-living conditions. Dynamic behavior is within acceptable limits under fixed conditions but has some limitations under simulated free-living conditions. We propose that this approach be adopted by developers of activity trackers to facilitate the tracker selection process for researchers and clinicians. Furthermore, we are convinced that the adjustable algorithm could contribute to the fast realization of new applications.
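A minimal sketch of such an adjustable three-class classifier is given below. The tunable parameters (window size, amount-of-physical-activity threshold, sensor orientation threshold) follow the description above, but the concrete features (signal-magnitude variability, thigh inclination) and all default values are illustrative assumptions, not the published algorithm.

```python
import numpy as np

def classify_windows(acc, fs, window_s=5.0,
                     activity_threshold=0.1,       # g; assumed value
                     orientation_threshold=45.0):  # degrees; assumed value
    """Classify thigh-worn accelerometer data per window.

    acc: (n_samples, 3) accelerations in g, x-axis assumed along the thigh;
    fs: sampling rate in Hz. Returns 'dynamic', 'standing' or 'sedentary'
    per non-overlapping window.
    """
    labels = []
    win = int(window_s * fs)
    for start in range(0, len(acc) - win + 1, win):
        w = acc[start:start + win]
        magnitude = np.linalg.norm(w, axis=1)
        # Amount of physical activity: variability of the signal magnitude.
        if magnitude.std() > activity_threshold:
            labels.append("dynamic")
            continue
        # Sensor orientation: mean inclination of the thigh axis w.r.t. gravity.
        mean_axis = w.mean(axis=0)
        angle = np.degrees(np.arccos(abs(mean_axis[0]) / np.linalg.norm(mean_axis)))
        labels.append("standing" if angle < orientation_threshold else "sedentary")
    return labels

def percentage_error(estimated: float, reference: float):
    """PE and APE as used to assess performance against the protocols."""
    pe = 100.0 * (estimated - reference) / reference
    return pe, abs(pe)
```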
Huntington’s disease (HD) and various spinocerebellar ataxias (SCA) are autosomal dominantly inherited neurodegenerative disorders caused by a CAG repeat expansion in the disease-related gene [1]. The impact of HD and SCA on families and individuals is enormous and far-reaching, as patients typically display first symptoms during midlife. HD is characterized by unwanted choreatic movements, behavioral and psychiatric disturbances, and dementia. SCAs are mainly characterized by ataxia, but also by other symptoms, including cognitive deficits, similarly affecting quality of life and leading to disability. These problems worsen as the disease progresses; affected individuals are no longer able to work, drive, or care for themselves, placing an enormous burden on family and caregivers. Patients eventually require intensive nursing-home care, and lifespan is reduced. Although the clinical and pathological phenotypes are distinct for each CAG repeat expansion disorder, similar molecular mechanisms are thought to underlie the effect of expanded CAG repeats in different genes. The predicted Age of Onset (AO) for HD, SCA1 and SCA3 (and five other CAG-repeat diseases) is based on the polyQ expansion, but the CAG/polyQ repeat length explains only about 50% of the variation in AO. A large variation in AO is observed, especially in the most common range between 40 and 50 repeats [11,12]. These large differences in onset not only imply that current individual AO predictions are imprecise (affecting important life decisions that patients need to make and hampering assessment of potential onset-delaying interventions), but also offer optimism that (patient-related) factors exist that can delay the onset of disease. To address both issues, we need a better model, based on patient-derived cells, that generates parameters that not only mirror the CAG-repeat-length dependency of these diseases but also better predict inter-patient variation in disease susceptibility and in the effectiveness of interventions. To this end, we will use a staggered project design, as explained in section 5.1, in which we will first determine which cellular and molecular determinants (referred to as landscapes) in isogenic iPSC models are associated with increased CAG repeat lengths, using deep-learning algorithms (DLA) (WP1). For this we will use a well-characterized control cell line in which we modify the CAG repeat length in the endogenous Ataxin-1, Ataxin-3 and Huntingtin genes from wild-type Q repeats to intermediate, adult-onset, and juvenile polyQ repeats. We will next expand the model with cells from three existing and new cohorts (SCA1, SCA3, and HD) of early-onset, adult-onset and late-onset/intermediate-repeat patients, for whom, besides accurate AO information, clinical parameters (MRI scans, cerebrospinal fluid markers, etc.) will be (made) available. This will be used for validation and to fine-tune the molecular landscapes (again using DLA) towards the best prediction of individual patient-related clinical markers and AO (WP3).
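To illustrate the steep but incomplete CAG-length dependency of AO, the sketch below evaluates the widely cited parametric model of Langbehn et al. (2004) for HD; this published model serves only as an external illustration and is not part of the project described here.

```python
import math

def expected_ao_hd(cag: int) -> float:
    """Expected HD age of onset (years) for a given CAG repeat length,
    following Langbehn et al. (2004): 21.54 + exp(9.556 - 0.146 * CAG)."""
    return 21.54 + math.exp(9.556 - 0.146 * cag)

# Predicted mean onset falls from roughly 63 years at 40 repeats to roughly
# 31 years at 50 repeats, while individual onsets scatter widely around
# each mean -- the unexplained variation this project aims to capture.
for cag in (40, 42, 44, 46, 48, 50):
    print(cag, round(expected_ao_hd(cag), 1))
```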
The same models and (most relevant) landscapes will also be used to evaluate novel mutant-protein-lowering strategies as these emerge from WP4. This overall development of landscape prediction is an iterative process that involves (a) data processing (WP5), (b) unsupervised data exploration and dimensionality reduction to find patterns in the data and create “labels” for similarity, and (c) development of supervised Deep Learning (DL) models for landscape prediction based on the labels from the previous step, as sketched below. Each iteration starts with data that is generated and deployed according to FAIR principles, and the developed deep-learning system will be instrumental in connecting these WPs. Insights into algorithm sensitivity from the predictive models will form the basis for discussion with field experts on the distinctions and their phenotypic consequences. While full development of accurate diagnostics might go beyond the timespan of the 5-year project, ideally our final landscapes can be used for new genetic counselling: when somebody tests positive for the gene, can we use his or her cells, feed them into the generated cell-based model, and better predict the AO and severity? While this will answer questions from clinicians and patient communities, it will also generate new ones, which is why we will study the ethical implications of such improved diagnostics in advance (WP6).
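A minimal sketch of one iteration of this loop is shown below; the concrete methods (standardisation, PCA, k-means, a small neural network as a stand-in for the deep model) are illustrative assumptions, not the project's actual architecture.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier

def landscape_iteration(X: np.ndarray, n_components: int = 10,
                        n_clusters: int = 4):
    """One pass of: (a) data processing, (b) unsupervised label creation,
    (c) supervised model fitting on the provisional labels."""
    # (a) Data processing: here simply standardise the feature matrix.
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-9)
    # (b) Unsupervised exploration: reduce dimensionality and cluster to
    # obtain provisional similarity "labels".
    Z = PCA(n_components=n_components).fit_transform(X)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(Z)
    # (c) Supervised learning: train a predictor on those labels; in the
    # project this step is a deep model validated against patient-derived
    # clinical markers and AO.
    model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500)
    model.fit(X, labels)
    return model, labels
```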
The bi-directional communication link with the physical system is one of the main distinguishing features of the Digital Twin paradigm. This continuous flow of data and information along its entire life cycle is what makes a Digital Twin a dynamic and evolving entity, and not merely a high-fidelity copy. There is an increasing realisation of the importance of a well-functioning digital twin in critical infrastructures such as water networks. The configuration of water network assets, such as valves, pumps, boosters and reservoirs, must be carefully managed and water flows rerouted, often manually, which is a slow and costly process. State-of-the-art water management systems assume a relatively static physical model that requires manual corrections. Any change in the network conditions or topology, due to degraded control mechanisms, ongoing maintenance, or changes in the external context such as a heat wave, makes the existing model diverge from reality. Our project proposes a unique approach to real-time monitoring of the water network that can apply automated changes to the model, based on the measured discrepancy between the model and the incoming IoT sensor data. We aim at an evolutionary approach that can apply detected changes to the model and update it in real time without the need for additional model validation and calibration. State-of-the-art deep learning algorithms will be applied to create a machine-learning, data-driven simulation of the water network system. Moreover, unlike most research, which focuses on the detection of network problems and sensor faults, we will investigate the possibility of going a step further and continuing to use the degraded network and malfunctioning sensors until maintenance and repairs can take place, which can take a long time. We will create a formal model and analyse the effect of different malfunctions on data readings, in order to construct a mitigating mechanism that is tailor-made for each malfunction type and allows continued use of the data, albeit in a limited capacity.
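A minimal sketch of the discrepancy-driven monitoring loop could look as follows; the model interface (predict/update methods), the error measure, and the tolerance are illustrative assumptions rather than the project's design.

```python
import numpy as np

def monitoring_step(model, inputs: np.ndarray, sensor_readings: np.ndarray,
                    tolerance: float = 0.05) -> float:
    """One monitoring cycle of the water-network digital twin."""
    predictions = model.predict(inputs)
    # Measured discrepancy between the twin and the physical network,
    # here a relative RMS error over all sensor readings.
    discrepancy = np.sqrt(np.mean((predictions - sensor_readings) ** 2))
    discrepancy /= np.sqrt(np.mean(sensor_readings ** 2)) + 1e-9
    if discrepancy > tolerance:
        # Evolutionary update: refit the data-driven simulation on the
        # latest readings instead of pausing for offline recalibration.
        model.update(inputs, sensor_readings)
    return discrepancy
```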