© 2025 SURF
We investigate whether the automatic generation of questions from an ontology leads to a reliable determination of a situation. With our Situation Awareness Question Generator (SAQG) we automatically generate questions from an ontology. The experiment shows that people with no previous experience can characterize hectic situations rather quickly and reliably. When humans participate as sensors to gather information, it is important to build on basic concepts of human perception and thought.
DOCUMENT
Traditional information systems for crisis response and management are centralized systems with a rigid hierarchical structure. Here we propose a decentralized system, which allows citizens to play a significant role as information sources and/or as helpers during the initial stages of a crisis. In our approach, different roles are assigned to citizens. To be able to designate the different roles automatically, our system needs to generate appropriate questions. On the basis of information theory and a restricted role ontology, we formalized the process of question generation. Three consecutive experiments were conducted with human users to evaluate to what extent the questioning process resulted in appropriate role determination. The results showed that the mental model of human users does not always comply with the formal model underpinning the question generation process.
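A minimal illustration of how such information-theoretic question selection could work is sketched below in Python; the role names, the yes/no question structure and the uniform prior are assumptions for illustration only, not the formalization used in the study.

# Minimal sketch (not the authors' implementation): select the next question by
# expected information gain over a set of candidate roles, assuming each question
# has a yes/no answer that splits the remaining roles. Roles and questions are
# illustrative placeholders.
import math

def entropy(p):
    """Shannon entropy of a probability distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def expected_information_gain(prior, yes_roles, roles):
    """Expected entropy reduction of a yes/no question whose 'yes' answer
    is consistent with the roles in yes_roles."""
    p_yes = sum(prior[r] for r in yes_roles)
    p_no = 1.0 - p_yes
    h_prior = entropy(prior.values())

    def conditional(subset, mass):
        if mass == 0:
            return 0.0
        return entropy([prior[r] / mass for r in subset])

    no_roles = [r for r in roles if r not in yes_roles]
    h_post = p_yes * conditional(yes_roles, p_yes) + p_no * conditional(no_roles, p_no)
    return h_prior - h_post

# Illustrative role set with a uniform prior and two candidate questions.
roles = ["bystander", "first_aider", "coordinator", "reporter"]
prior = {r: 1.0 / len(roles) for r in roles}
questions = {
    "Are you trained in first aid?": ["first_aider", "coordinator"],
    "Are you currently at the incident site?": ["bystander", "first_aider", "reporter"],
}

best = max(questions, key=lambda q: expected_information_gain(prior, questions[q], roles))
print("Next question:", best)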
DOCUMENT
Using an ontology to automatically generate questions for ordinary people requires a structure and concepts compliant with human thought. Here we present methods to develop a pragmatic, an expert-based, and a basic-level ontology, together with a framework to evaluate these ontologies. Comparing these ontologies shows that expert-based ontologies are the easiest to construct but lack the required cognitive semantic characteristics. Basic-level ontologies have structure and concepts which are better in terms of cognitive semantics but are the most expensive to construct.
DOCUMENT
The design of healthcare facilities is a complex and dynamic process, which involves many stakeholders each with their own set of needs. In the context of healthcare facilities, this complexity exists at the intersection of technology and society because the very design of these buildings forces us to consider the technology–human interface directly in terms of living-space, ethics and social priorities. In order to grasp this complexity, current healthcare design models need mechanisms to help prioritize the needs of the stakeholders. Assistance in this process can be derived by incorporating elements of technology philosophy into existing design models. In this article, we develop and examine the Inclusive and Integrated Health Facilities Design model (In2Health Design model) and its foundations. This model brings together three existing approaches: (i) the International Classification of Functioning, Disability and Health, (ii) the Model of Integrated Building Design, and (iii) the ontology by Dooyeweerd. The model can be used to analyze the needs of the various stakeholders, in relationship to the required performances of a building as delivered by various building systems. The applicability of the In2Health Design model is illustrated by two case studies concerning (i) the evaluation of the indoor environment for older people with dementia and (ii) the design process of the redevelopment of an existing hospital for psychiatric patients.
DOCUMENT
In 2015, the Object Management Group published the Decision Model and Notation (DMN) standard with the goal of structuring and connecting business processes, decisions and the underlying business logic. Practice shows that several vendors adopted the DMN standard and (started to) integrate it with their tooling. However, practice also shows that there are vendors who (consciously) deviate from the DMN standard while still trying to achieve the goal DMN set out to reach. This research aims to 1) analyze and benchmark available tooling and their accompanying languages against the DMN standard and 2) understand the different approaches these vendor-specific languages take to modeling decisions and the underlying business logic. We achieved this by analyzing secondary data. In total, 22 decision modelling tools, together with their languages, were analyzed. The results of this study reveal six propositions regarding the adoption of DMN within the sample of tools. These results could be used to improve both the tools and the DMN standard itself, and thereby foster adoption. Possible future research directions include improving the generalizability of the results by including more of the available tools, using different methods for data collection and analysis, and analyzing more deeply the generation of DMN directly from tool-native languages.
DOCUMENT
The built environment requires energy-flexible buildings to reduce energy peak loads and to maximize the use of (decentralized) renewable energy sources. The challenge is to arrive at smart control strategies that respond to the increasing variations in both the energy demand and the variable energy supply. This enables grid integration in existing energy networks with limited capacity and maximizes the use of decentralized sustainable generation. Buildings can play a key role in the optimization of the grid capacity by applying demand-side management control. To adjust the grid energy demand profile of a building without compromising the user requirements, the building should acquire some energy flexibility capacity. The main ambition of the Brains for Buildings Work Package 2 is to develop smart control strategies that use the operational flexibility of non-residential buildings to minimize energy costs, reduce emissions and avoid spikes in power network load, without compromising comfort levels. To realize this ambition, the following key components will be developed within B4B WP2: (A) open-source HVAC and electric services models, (B) energy demand prediction models and (C) flexibility management control models. This report describes the first two key components developed, (A) and (B), and presents different prediction models covering various building components. The models are of three types: white-box models, grey-box models, and black-box models. Each model developed is presented in a separate chapter. The chapters start with the goal of the prediction model, followed by the description of the model and the results obtained when it is applied to a case study. The models developed are: two approaches based on white-box models, namely (1) white-box models based on Modelica libraries for energy prediction of a building and its components and (2) a hybrid predictive digital twin based on white-box building models to predict the dynamic energy response of the building and its components; (3) the use of CO₂ monitoring data to derive either the ventilation flow rate or the occupancy; (4) prediction of the heating demand of a building; (5) a feedforward neural network model to predict the building energy use and its uncertainty; and (6) prediction of PV solar production. The first model aims to predict the energy use and energy production pattern of different building configurations with open-source software, OpenModelica, and open-source libraries, the IBPSA libraries. The white-box model simulation results are used to produce design and control advice for increasing the building energy flexibility. The use of the libraries for building a model was first tested on a simple residential unit and is now being tested on a non-residential unit, the Haagse Hogeschool building. The lessons learned show that it is possible to model a building by combining libraries; however, developing the model is very time-consuming. The test also highlighted the need to define standard scenarios for testing the energy flexibility and the need for a practical visualization if the simulation results are to be used to give advice about potential increases in energy flexibility. The goal of the hybrid model, which combines a white-box model for the building and its systems with a data-driven model for user behaviour, is to predict the energy demand and energy supply of a building.
The model's application focuses on the use case of the TNO building at Stieltjesweg in Delft during a summer period, with a specific emphasis on cooling demand. Preliminary analysis shows that the monitoring results of the building behaviour are in line with the simulation results. Development is currently in progress to improve the model predictions by including the solar shading from surrounding buildings, models of automatic shading devices, and model calibration including the energy use of the chiller. The goal of the third model is to derive the recent and current ventilation flow rate over time from monitoring data on CO₂ concentration and occupancy, as well as to derive the recent and current occupancy over time from monitoring data on CO₂ concentration and ventilation flow rate. The grey-box model used is based on the GEKKO Python tool. The model was tested with the data of six office rooms at Windesheim University of Applied Sciences. The model had low precision when deriving the ventilation flow rate, especially at low CO₂ concentrations. It had good precision when deriving occupancy from CO₂ concentration and ventilation flow rate. Further research is needed to determine whether these findings hold in other situations, such as meeting spaces and classrooms. The goal of the fourth chapter is to compare a simplified white-box model and a black-box model for predicting the heating energy use of a building. The aim is to integrate these prediction models into the energy management systems of SME buildings. The two models were tested with data from a residential unit, since data from an SME building were not available at the time of the analysis. The prediction models developed have low accuracy and in their current form cannot be integrated into an energy management system. In general, the black-box model obtained a higher prediction accuracy than the white-box model. The goal of the fifth model is to predict the energy use in a building using a black-box model and to quantify the uncertainty in the prediction. The black-box model is based on a feedforward neural network. The model was tested with the data of two buildings: an educational building and a commercial building. The strength of the model lies in the ensemble prediction and in treating the uncertainty intrinsically present in the data as an absolute deviation. Using a rolling window technique, the model can predict energy use and uncertainty while incorporating possible changes in building use. The testing on two different cases demonstrates the applicability of the model to different types of buildings. The goal of the sixth and last model developed is to predict the energy production of PV panels in a building with the use of a black-box model. The choice to develop a model of the PV panels is based on the analysis of the main contributors to the peak energy demand and peak energy delivery in the case of the DWA office building. On a fault-free test set, the model meets the requirements for a calibrated model according to the FEMP and ASHRAE criteria for the error metrics. According to the IPMVP criteria, the model should be improved further. The results of the performance metrics are in the range of values found in the literature. For accurate peak prediction, a year of training data is recommended in the given approach without lagged variables.
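As an illustration of the kind of grey-box estimation described for the third model, the following Python sketch sets up a single-zone CO₂ mass balance in GEKKO to estimate a time-varying ventilation flow rate from CO₂ and occupancy measurements; the room volume, the per-person CO₂ generation rate and the synthetic data are assumptions for illustration and do not reflect the project's actual configuration.

# Illustrative grey-box sketch (not the project's actual model): estimate a
# time-varying ventilation flow rate from measured CO2 and occupancy with a
# single-zone mass balance, solved as a dynamic estimation problem in GEKKO.
# Room volume, per-person CO2 generation and the synthetic data are assumptions.
import numpy as np
from gekko import GEKKO

t = np.linspace(0, 8, 33)                       # 8 hours, 15-minute steps
co2_meas = 450 + 400 * np.sin(np.pi * t / 8)    # synthetic indoor CO2 [ppm]
occupancy = np.where((t > 1) & (t < 7), 2, 0)   # synthetic occupancy [persons]

m = GEKKO(remote=False)
m.time = t

V = 75.0        # assumed room volume [m3]
c_out = 420.0   # assumed outdoor CO2 concentration [ppm]
g = 18000.0     # assumed CO2 generation per person [ppm*m3/h] (~0.018 m3/h)

n_occ = m.Param(value=occupancy)
Q = m.MV(value=50.0, lb=0.0, ub=500.0)          # ventilation flow [m3/h] to estimate
Q.STATUS = 1

C = m.CV(value=co2_meas)                        # indoor CO2 [ppm]
C.FSTATUS = 1                                   # fit the model to the measurements

# Single-zone CO2 mass balance: V * dC/dt = Q * (C_out - C) + occupancy * g
m.Equation(V * C.dt() == Q * (c_out - C) + n_occ * g)

m.options.IMODE = 5                             # dynamic estimation mode
m.solve(disp=False)

print("Estimated ventilation flow [m3/h]:", np.round(Q.value, 1))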
This report presents the results and lessons learned from implementing white-box, grey-box and black-box models to predict the energy use and energy production of buildings, or of variables directly related to them. Each of the models has its advantages and disadvantages. Further research along these lines is needed to develop the full potential of this approach.
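For context on the calibration criteria cited for the PV model, the sketch below computes the two error metrics in which those criteria are typically expressed, NMBE and CV(RMSE); the threshold values are the ones commonly cited for hourly data and are included here as assumptions, so the FEMP, ASHRAE and IPMVP guidelines themselves remain the authoritative source.

# Minimal sketch of the calibration metrics NMBE and CV(RMSE). The limits below
# (|NMBE| <= 10 %, CV(RMSE) <= 30 % for FEMP/ASHRAE hourly data; |NMBE| <= 5 %,
# CV(RMSE) <= 20 % for IPMVP) are commonly cited values, included as assumptions.
import numpy as np

def nmbe(measured: np.ndarray, predicted: np.ndarray) -> float:
    """Normalized mean bias error in percent."""
    return 100.0 * np.sum(measured - predicted) / (len(measured) * np.mean(measured))

def cv_rmse(measured: np.ndarray, predicted: np.ndarray) -> float:
    """Coefficient of variation of the RMSE in percent."""
    rmse = np.sqrt(np.mean((measured - predicted) ** 2))
    return 100.0 * rmse / np.mean(measured)

def check_calibration(measured, predicted, nmbe_limit=10.0, cvrmse_limit=30.0):
    """Return both metrics and whether they fall within the given limits."""
    m = nmbe(measured, predicted)
    c = cv_rmse(measured, predicted)
    return {"NMBE_%": m, "CV_RMSE_%": c,
            "within_limits": abs(m) <= nmbe_limit and c <= cvrmse_limit}

# Example with synthetic hourly PV production data [kWh]
rng = np.random.default_rng(0)
measured = np.clip(rng.normal(3.0, 1.0, 24 * 30), 0, None)
predicted = measured * 0.97 + rng.normal(0.0, 0.2, measured.size)
print(check_calibration(measured, predicted))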
DOCUMENT
To study the ways in which compounds can induce adverse effects, toxicologists have been constructing Adverse Outcome Pathways (AOPs). An AOP can be considered a pragmatic tool to capture and visualize the mechanisms underlying different types of toxicity inflicted by any kind of stressor; it describes the interactions between key entities that lead to the adverse outcome on multiple biological levels of organization. The construction or optimization of an AOP is a labor-intensive process, which currently depends on the manual search, collection, reviewing and synthesis of the available scientific literature. This process could, however, be greatly facilitated by using Natural Language Processing (NLP) to extract the information contained in scientific literature in a systematic, objective, and rapid manner, leading to greater accuracy and reproducibility. This would allow researchers to invest their expertise in the substantive assessment of the AOPs, replacing the time spent on evidence gathering with a critical review of the data extracted by NLP. As case examples, we selected two frequent adversities observed in the liver, namely cholestasis and steatosis, denoting the accumulation of bile and lipid, respectively. We used deep learning language models to recognize entities of interest in text and establish causal relationships between them. We demonstrate how an NLP pipeline combining Named Entity Recognition and a simple rule-based relationship extraction model can not only screen the literature for compounds related to liver adversities, but also extract mechanistic information on how such adversities develop, from the molecular to the organismal level. Finally, we provide some perspectives opened by the recent progress in Large Language Models and how these could be used in the future. We propose that this work brings two main contributions: 1) a proof of concept that NLP can support the extraction of information from text for modern toxicology and 2) a template open-source model for the recognition of toxicological entities and the extraction of their relationships. All resources are openly accessible via GitHub (https://github.com/ontox-project/en-tox).
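As a hedged illustration of the kind of pipeline described (and not the en-tox implementation itself), the following Python sketch combines a pretrained transformer NER model with a simple causal-trigger rule to link a compound to an adverse effect mentioned in the same sentence; the model checkpoint and the entity label names are placeholders whose exact values depend on the checkpoint chosen.

# Illustrative sketch (not the en-tox pipeline): pretrained transformer NER plus
# a rule that links a compound to an adverse effect when they co-occur in a
# sentence containing a causal trigger word.
from transformers import pipeline

# Placeholder checkpoint: substitute any biomedical token-classification model.
MODEL_NAME = "d4data/biomedical-ner-all"
ner = pipeline("token-classification", model=MODEL_NAME, aggregation_strategy="simple")

CAUSAL_TRIGGERS = ("induce", "induces", "induced", "cause", "causes", "leads to", "results in")

def extract_relations(sentence: str):
    """Return (compound, trigger, effect) triples found in a single sentence."""
    entities = ner(sentence)
    # Entity label names depend on the chosen checkpoint; these filters are assumptions.
    compounds = [e["word"] for e in entities
                 if "chemical" in e["entity_group"].lower() or "medication" in e["entity_group"].lower()]
    effects = [e["word"] for e in entities
               if "disease" in e["entity_group"].lower() or "sign" in e["entity_group"].lower()]
    triggers = [t for t in CAUSAL_TRIGGERS if t in sentence.lower()]
    if compounds and effects and triggers:
        return [(c, triggers[0], a) for c in compounds for a in effects]
    return []

print(extract_relations("Amiodarone induced steatosis in the liver of treated rats."))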
DOCUMENT
In recent years, a step change has been seen in the rate of adoption of Industry 4.0 technologies by manufacturers and industrial organizations alike. This article discusses the current state of the art in the adoption of Industry 4.0 technologies within the construction industry. Increasing complexity in onsite construction projects, coupled with the need for higher productivity, is leading to increased interest in the potential use of Industry 4.0 technologies. This article discusses the relevance of the following key Industry 4.0 technologies to construction: data analytics and artificial intelligence, robotics and automation, building information management, sensors and wearables, digital twin, and industrial connectivity. Industrial connectivity is a key aspect, as it ensures that all Industry 4.0 technologies are interconnected, allowing their full benefits to be realized. This article also presents a research agenda for the adoption of Industry 4.0 technologies within the construction sector, a three-phase use of intelligent assets from the point of manufacture up to after build, and a four-stage R&D process for the implementation of smart wearables on a digitally enhanced construction site.
DOCUMENT
Adverse Outcome Pathways (AOPs) are conceptual frameworks that tie an initial perturbation (molecular initiating event) to a phenotypic toxicological manifestation (adverse outcome) through a series of steps (key events). They therefore provide a standardized way to map and organize toxicological mechanistic information. As such, AOPs inform on the key events underlying toxicity, thus supporting the development of New Approach Methodologies (NAMs), which aim to reduce the use of animal testing for toxicology purposes. However, the establishment of a novel AOP relies on the gathering of multiple streams of evidence and information, from the available literature to knowledge databases. Often, this information is in the form of free text, also called unstructured text, which is not immediately digestible by a computer. Processing this information manually is thus tedious and, with the growing volume of data available, increasingly time-consuming. The advancement of machine learning provides alternative solutions to this challenge. To extract and organize information from relevant sources, it seems valuable to employ deep learning Natural Language Processing (NLP) techniques. We review here some of the recent progress in the NLP field and show how these techniques have already demonstrated value in the biomedical and toxicology areas. We also propose an approach to efficiently and reliably extract and combine relevant toxicological information from text. These data can be used to map the underlying mechanisms that lead to toxicological effects and to start building quantitative models, in particular AOPs, ultimately allowing animal-free, human-based hazard and risk assessment.
DOCUMENT