Productivity in construction is relatively low compared to other industries. This is particularly true for labour productivity. Problems that contribute to low labour productivity are often related to unorganised workspaces and to inefficient organisation of work, materials and equipment. In terms of time use, site workers spend their time on various activities, including installing, waiting and walking. In lean production terms, time use should be value-adding rather than wasteful or non-value-adding. The study reported in this paper measures time use and movement by applying an automated data system. The case study reflects a limited application to one specific kind of activity, namely door installation. The study investigated time use and movements based on interviews and on automated detection of the workforce. The interviews gave insight into the time build-up of the work and the value-adding time use per day. The automated tracking indicated time intervals of uninterrupted presence of site workers at work locations, giving an indication of value-adding time. The time measurements enable comparison of the time-use categories of site workers. The study showed that the data system calculated the amounts of productive and value-adding time one would expect based on the organisation and characteristics of the work. However, the discussion of the results underlined that the particular characteristics of individual projects and the type of teamwork organisation may well have an impact on the productivity levels of workers. Further applications, comparative studies of projects, and continued development and extension of the automated data system would be helpful.
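To illustrate the kind of analysis such tracking data supports, the sketch below classifies presence intervals as value-adding or not (the column names, the set of work locations, and the 30-minute presence threshold are illustrative assumptions, not details from the study):

```python
import pandas as pd

# Hypothetical tracking log: one row per detected presence interval.
# Field names, locations and the threshold are illustrative assumptions.
log = pd.DataFrame({
    "worker": ["A", "A", "A", "B"],
    "location": ["door_3F", "corridor", "door_3F", "door_2F"],
    "start": pd.to_datetime(["2023-05-01 08:00", "2023-05-01 09:10",
                             "2023-05-01 09:25", "2023-05-01 08:15"]),
    "end":   pd.to_datetime(["2023-05-01 09:05", "2023-05-01 09:20",
                             "2023-05-01 10:40", "2023-05-01 11:00"]),
})

log["minutes"] = (log["end"] - log["start"]).dt.total_seconds() / 60
work_locations = {"door_3F", "door_2F"}  # installation spots (assumed)
MIN_PRESENCE = 30                        # minutes; assumed threshold

# Treat sufficiently long, uninterrupted presence at a work location
# as a proxy for value-adding time; everything else as non-value-adding.
log["value_adding"] = (log["location"].isin(work_locations)
                       & (log["minutes"] >= MIN_PRESENCE))

va = log.loc[log["value_adding"]].groupby("worker")["minutes"].sum()
total = log.groupby("worker")["minutes"].sum()
print((va / total).fillna(0))  # fraction of tracked time that is value-adding
```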
In practice, faults in building installations often go unnoticed because automated systems to diagnose such faults are not in common use, despite the many methods that have been proposed: they are cumbersome to apply and do not match the way of thinking of HVAC engineers. Additionally, fault diagnosis and energy performance diagnosis are seldom combined, even though energy wastage is mostly a consequence of component, sensor or control faults. In this paper, new advances on the 4S3F diagnosis framework for automated detection of energy waste in HVAC systems are presented. The architecture of an HVAC system can be derived from its process and instrumentation diagram (P&ID), usually set up by the HVAC designers. The paper demonstrates how all possible faults and symptoms can be extracted from the P&ID in a highly structured way and classified into 4 types of symptoms (deviations from balance equations, operational states, energy performances or additional information) and 3 types of faults (component, control and model faults). Symptoms and faults are related to each other through diagnostic Bayesian networks (DBNs), which act as an expert system. During operation of the HVAC system, the data from the BMS are converted to symptoms, which are fed to the DBN. The DBN analyses the symptoms and determines the probability of each fault. Generic indicators are proposed for the 4 types of symptoms. Standard DBN models for common components, controls and models are developed, and it is demonstrated how to combine them to represent the complete HVAC system. Both the symptom detection and the fault identification parts are tested on historical BMS data of an ATES system including a heat pump, a boiler, solar panels and hydronic systems. The energy savings resulting from fault corrections are estimated and amount to 25%. Finally, the 4S3F method is extended to hard and soft sensor faults. Sensors are the core of any FDD system and any control system, so automated diagnosis of sensor faults is essential. By considering hard sensors as components and soft sensors as models, they can be integrated into the 4S3F method.
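A minimal sketch of the DBN inference step, using the pgmpy library (the two-node structure and all probability values are illustrative assumptions, not figures from the paper):

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Toy DBN: one component fault ("valve_fault") explains one balance
# symptom ("heat_balance_deviation"). All numbers are illustrative.
model = BayesianNetwork([("valve_fault", "heat_balance_deviation")])

prior = TabularCPD("valve_fault", 2, [[0.95],   # P(no fault)
                                      [0.05]])  # P(fault)
likelihood = TabularCPD(
    "heat_balance_deviation", 2,
    # columns: valve_fault = no, yes
    [[0.90, 0.15],   # P(no deviation | ...)
     [0.10, 0.85]],  # P(deviation    | ...)
    evidence=["valve_fault"], evidence_card=[2],
)
model.add_cpds(prior, likelihood)

# Symptom detected in the BMS data -> posterior fault probability.
posterior = VariableElimination(model).query(
    ["valve_fault"], evidence={"heat_balance_deviation": 1}
)
print(posterior)  # P(valve_fault | deviation) is roughly 0.31 here
```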
Analyzing historical decision-related data can help support actual operational decision-making processes. Decision mining can be employed for such analysis. This paper proposes the Decision Discovery Framework (DDF), designed to develop, adapt, or select a decision discovery algorithm by outlining specific guidelines for input data usage, classifier handling, and decision model representation. The framework incorporates the Decision Model and Notation (DMN) standard for enhanced comprehensibility and normalization to simplify decision tables. The framework’s efficacy was tested by adapting the C4.5 algorithm into the DM45 algorithm. The proposed adaptations are (1) the utilization of a decision log, (2) ensuring an unpruned decision tree, (3) the generation of DMN, and (4) the normalization of the decision table. Future research can focus on supporting practitioners in modeling decisions, ensuring their decision-making is compliant, and suggesting improvements to the modeled decisions. Another future research direction is to explore the ability to process unstructured data as input for the discovery of decisions.
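A minimal sketch of the unpruned-tree adaptation, with scikit-learn's CART standing in for C4.5/DM45 and a hypothetical decision log as input:

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical decision log: inputs and the recorded decision outcome.
log = pd.DataFrame({
    "amount":         [120, 800, 450, 90, 1500, 300],
    "customer_years": [1, 5, 3, 0, 8, 2],
    "decision":       ["approve", "review", "approve",
                       "approve", "review", "approve"],
})

X, y = log[["amount", "customer_years"]], log["decision"]

# scikit-learn trees are unpruned by default (no max_depth, ccp_alpha=0),
# mirroring adaptation (2) of DM45; CART stands in for C4.5 here.
tree = DecisionTreeClassifier(criterion="entropy", random_state=0).fit(X, y)

# Each root-to-leaf path corresponds to one rule, i.e. one row of a
# DMN-style decision table before normalization.
print(export_text(tree, feature_names=["amount", "customer_years"]))
```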
Background: Adverse outcome pathway (AOP) networks are versatile tools in toxicology and risk assessment that capture and visualize mechanisms driving toxicity originating from various data sources. They share a common structure consisting of a set of molecular initiating events and key events, connected by key event relationships, leading to the actual adverse outcome. AOP networks are to be considered living documents that should be frequently updated by feeding in new data. Such iterative optimization exercises are typically done manually, which not only is a time-consuming effort but also bears the risk of overlooking critical data. The present study introduces a novel approach for AOP network optimization of a previously published AOP network on chemical-induced cholestasis, using artificial intelligence to facilitate automated data collection followed by quantitative confidence assessment of molecular initiating events, key events, and key event relationships. Methods: Artificial intelligence-assisted data collection was performed by means of the free web platform Sysrev. Confidence levels of the tailored Bradford-Hill criteria were quantified for the purpose of weight-of-evidence assessment of the optimized AOP network. Scores were calculated for biological plausibility, empirical evidence, and essentiality, and were integrated into a total key event relationship confidence value. The optimized AOP network was visualized using Cytoscape, with the node size representing the incidence of the key event and the edge size indicating the total confidence in the key event relationship. Results: This approach resulted in the identification of 38 unique key events and 135 unique key event relationships. The key event with the highest incidence was transporter changes, which also formed the most confident key event relationship with the adverse outcome, cholestasis. Other important key events in the AOP network include nuclear receptor changes, intracellular bile acid accumulation, bile acid synthesis changes, oxidative stress, inflammation, and apoptosis. Conclusions: This process led to the creation of an extensively informative AOP network focused on chemical-induced cholestasis. This optimized AOP network may serve as a mechanistic compass for the development of a battery of in vitro assays to reliably predict chemical-induced cholestatic injury.
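The confidence integration and the visual mapping can be sketched as follows (a hypothetical networkx snippet; the incidence counts, the 1-3 score scale, and the averaging rule are assumptions, since the abstract does not state the exact aggregation formula):

```python
import networkx as nx
import matplotlib.pyplot as plt

# Toy fragment of the cholestasis AOP network. Incidence counts and
# Bradford-Hill sub-scores (assumed 1-3 scale) are illustrative only;
# the averaging rule below is an assumption, not the paper's formula.
G = nx.DiGraph()
G.add_node("transporter_changes", incidence=42)
G.add_node("bile_acid_accumulation", incidence=30)
G.add_node("cholestasis", incidence=55)

kers = [
    ("transporter_changes", "bile_acid_accumulation",
     {"plausibility": 3, "empirical": 2, "essentiality": 3}),
    ("bile_acid_accumulation", "cholestasis",
     {"plausibility": 3, "empirical": 3, "essentiality": 2}),
]
for src, dst, scores in kers:
    total = sum(scores.values()) / len(scores)  # assumed integration rule
    G.add_edge(src, dst, confidence=total, **scores)

# Node size ~ key event incidence, edge width ~ total KER confidence,
# mirroring the Cytoscape visual mapping described in the abstract.
sizes = [30 * G.nodes[n]["incidence"] for n in G]
widths = [G[u][v]["confidence"] for u, v in G.edges]
nx.draw_networkx(G, node_size=sizes, width=widths)
plt.show()
```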
Current research on data in policy has primarily focused on street-level bureaucrats, neglecting the changes in the work of policy advisors. This research fills this gap by presenting an explorative theoretical understanding of the integration of data, local knowledge and professional expertise in the work of policy advisors. The theoretical perspective we develop builds upon Vickers’s (1995, The Art of Judgment: A Study of Policy Making, Centenary Edition, SAGE) notion of judgments in policymaking. Empirically, we present a case study of a Dutch law enforcement network for preventing and reducing organized crime. Based on interviews, observations, and documents collected over a 13-month period of ethnographic fieldwork, we study how policy advisors within this network make their judgments. In contrast to the idea of data as a rationalizing force, our study reveals that the way data sources are selected and analyzed for judgments is very much shaped by the existing local and expert knowledge of policy advisors. The weight given to data is highly situational: we found that policy advisors welcome data when scoping the policy issue, but for judgments more closely connected to actual policy interventions, data are given limited value.
This report from Inholland University examines the impact of data-driven practices on non-journalistic media production and the creative industries. It explores trends, showcases advancements, and highlights opportunities and threats in this dynamic landscape. By examining the perspectives of various stakeholders, it provides actionable insights for navigating challenges and leveraging opportunities. Through curated showcases and analyses, the report underscores the transformative potential of data-driven work while addressing concerns such as copyright issues and the role of AI in replacing human artists. The findings culminate in a comprehensive overview that guides informed decision-making in the creative industry.
Pauses in speech may be categorized on the basis of their length. Some authors claim that there are two categories (short and long pauses) (Baken & Orlikoff, 2000); others claim that there are three (Campione & Véronis, 2002), or even more. Pause lengths may be affected in speakers with aphasia. Individuals with dementia probably caused by Alzheimer’s disease (AD) or with Parkinson’s disease (PD) interrupt their speech longer and more frequently. One infrequent form of dementia, non-fluent primary progressive aphasia (PPA-NF), is even defined by its unusual interruption pattern (“hesitant and labored speech”). Although human listeners can often easily distinguish pathological from healthy speech, it is as yet unclear how software can detect the relevant patterns. The research question in this study is: how can software measure the statistical parameters that characterize the disfluent speech of PPA-NF/AD/PD patients in connected conversational speech?
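As a starting point for such measurements, silent pauses can be extracted from a recording and their length distribution summarized (a minimal librosa sketch; the file path, the 25 dB silence threshold, and the 2 s short/long boundary are illustrative assumptions):

```python
import numpy as np
import librosa

# Load a mono recording of connected speech (path is a placeholder).
y, sr = librosa.load("conversation.wav", sr=16000)

# Non-silent intervals via an energy threshold; pauses are the gaps
# between them. The 25 dB threshold is an assumed, tunable value.
speech = librosa.effects.split(y, top_db=25)
pauses = np.array([(speech[i + 1][0] - speech[i][1]) / sr
                   for i in range(len(speech) - 1)])

# Simple statistical parameters of the pause-length distribution.
duration_min = len(y) / sr / 60
print(f"pauses/min:  {len(pauses) / duration_min:.1f}")
print(f"mean length: {pauses.mean():.2f} s")
print(f"median:      {np.median(pauses):.2f} s")
# Two-category split (cf. Baken & Orlikoff); the 2 s boundary is assumed.
print(f"long (>2 s): {(pauses > 2.0).sum()} of {len(pauses)}")
```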
The Heating, Ventilation and Air Conditioning (HVAC) sector is responsible for a large part of total worldwide energy consumption, a significant share of which is caused by incorrect operation of controls and inadequate maintenance. HVAC systems are becoming increasingly complex, especially due to multi-commodity energy sources, and as a result the chance of failures in systems and controls will increase. Therefore, systems that diagnose energy performance are of paramount importance. However, despite much research on Fault Detection and Diagnosis (FDD) methods for HVAC systems, these methods are rarely applied. One major reason is that the proposed methods differ from the approaches taken by HVAC designers, who employ process and instrumentation diagrams (P&IDs). This led to the following main research question: which FDD architecture is suitable for HVAC systems in general to support the setup and implementation of FDD methods, including energy performance diagnosis? First, an energy performance FDD architecture based on the information embedded in P&IDs was elaborated. The new FDD method, called the 4S3F method, combines systems theory with data analysis. In the 4S3F method, the detection and diagnosis phases are separated. Symptoms and faults are classified into 4 types of symptoms (deviations from balance equations, operating states (OS), energy performances (EP), and additional information) and 3 types of faults (component, control and model faults). Second, the 4S3F method was tested in four case studies. In the first case study, the symptom detection part was tested on a whole year of historical Building Management System (BMS) data from the combined heat and power plant of the THUAS (The Hague University of Applied Sciences) building in Delft, which includes an aquifer thermal energy storage (ATES) system, a heat pump, a gas boiler, and hot and cold water hydronic systems. This case study showed that balance, EP and OS symptoms can be extracted from the P&ID and their presence detected in the data. In the second case study, a proof of principle of the fault diagnosis part of the 4S3F method was successfully performed on the same HVAC system, extracting possible component and control faults from the P&ID. A diagnostic Bayesian network (DBN), which mimics the way HVAC engineers diagnose faults, was applied to identify the probability of all possible faults by interpreting the symptoms. The DBN was set up in accordance with the P&ID, i.e., with the same structure. Energy savings from fault corrections were estimated at up to 25% of the primary energy consumption, even though the HVAC system had initially been considered to perform excellently. In the third case study, a demand-driven ventilation (DCV) system was analysed. The analysis showed that the 4S3F method also works to identify faults in an air ventilation system.
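A balance symptom of the kind detected in the first case study can be computed directly from BMS time series, as in the sketch below (the column names, the constant specific heat, and the 5% tolerance are illustrative assumptions):

```python
import pandas as pd

# Hypothetical hourly BMS export for a hydronic circuit.
bms = pd.DataFrame({
    "flow_m3h":   [2.0, 2.1, 2.0, 1.9],     # water flow
    "t_supply_C": [45.0, 46.0, 44.5, 45.5],
    "t_return_C": [35.0, 35.5, 34.8, 35.2],
    "q_meter_kW": [23.2, 25.6, 22.4, 17.0],  # heat meter reading
})

RHO_CP = 1.163  # kWh/(m3*K) for water around 40 C; assumed constant

# Heat balance: metered heat should match flow * temperature difference.
q_calc = RHO_CP * bms["flow_m3h"] * (bms["t_supply_C"] - bms["t_return_C"])
residual = (bms["q_meter_kW"] - q_calc) / q_calc

# Balance symptom: residual outside an assumed 5% tolerance band.
bms["balance_symptom"] = residual.abs() > 0.05
print(bms[["q_meter_kW", "balance_symptom"]])  # last row flags a symptom
```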
This study presents an automated method for detecting and measuring the apex head thickness of tomato plants, a critical phenotypic trait associated with plant health, fruit development, and yield forecasting. Because the apex is sensitive to physical contact, non-invasive monitoring is essential. This paper addresses the demand among Dutch growers for automated, contactless systems. Our approach integrates deep learning models (YOLO and Faster RCNN) with RGB-D camera imaging to enable accurate, scalable, and non-invasive measurement in greenhouse environments. A dataset of 600 RGB-D images, captured in a controlled greenhouse, was fully preprocessed, annotated, and augmented for optimal training. Experimental results show that YOLOv8n achieved superior performance, with a precision of 91.2 %, a recall of 86.7 %, and an Intersection over Union (IoU) score of 89.4 %. Other models, such as YOLOv9t, YOLOv10n, YOLOv11n, and Faster RCNN, demonstrated lower precision scores of 83.6 %, 74.6 %, 75.4 %, and 78 %, respectively. Their IoU scores were also lower, indicating less reliable detection. This research establishes a robust, real-time method for precision agriculture through automated apex head thickness measurement.
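Converting a detected bounding box into a physical thickness combines the detector output with the aligned depth channel (a hypothetical sketch using the ultralytics API and a pinhole-camera model; the weights file, file paths, and focal length are placeholders, not the paper's pipeline):

```python
import numpy as np
from ultralytics import YOLO

# Detector trained on apex-head annotations (weights path is a placeholder).
model = YOLO("apex_yolov8n.pt")
result = model("rgb_frame.png")[0]

# Aligned depth map in millimetres from the RGB-D camera (placeholder file);
# FX is the horizontal focal length in pixels, an assumed camera intrinsic.
depth_mm = np.load("depth_frame.npy")
FX = 615.0

for box in result.boxes.xyxy.cpu().numpy():
    x1, y1, x2, y2 = box
    cx, cy = int((x1 + x2) / 2), int((y1 + y2) / 2)
    z = depth_mm[cy, cx]          # depth at the centre of the detection
    width_px = x2 - x1
    # Pinhole model: physical extent = pixel extent * depth / focal length.
    thickness_mm = width_px * z / FX
    print(f"apex thickness ~ {thickness_mm:.1f} mm at {z:.0f} mm range")
```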
We present a novel architecture for an AI system that allows a priori knowledge to be combined with deep learning. In traditional neural networks, all available data is pooled at the input layer. Our alternative neural network is constructed so that partial representations (invariants) are learned in the intermediate layers, which can then be combined with a priori knowledge or with other predictive analyses of the same data. Because learning is more efficient, smaller training datasets suffice. In addition, because this architecture allows the inclusion of a priori knowledge and interpretable predictive models, the interpretability of the entire system increases while the data can still be processed by a black-box neural network. Our system uses networks of neurons rather than single neurons to enable the representation of approximations (invariants) of the output.
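One way to realize such an architecture is to expose the intermediate representation and concatenate it with a priori knowledge features before the output head (a hypothetical PyTorch sketch; the layer sizes and the form of the prior features are assumptions, not the authors' design):

```python
import torch
import torch.nn as nn

class InvariantNet(nn.Module):
    """Learns an intermediate 'invariant' representation that is merged
    with a priori knowledge features before the final prediction."""

    def __init__(self, n_raw=16, n_prior=4, n_invariant=8):
        super().__init__()
        # Sub-network of neurons producing the partial representation.
        self.invariant = nn.Sequential(
            nn.Linear(n_raw, 32), nn.ReLU(),
            nn.Linear(32, n_invariant), nn.ReLU(),
        )
        # Head combines learned invariants with a priori knowledge.
        self.head = nn.Linear(n_invariant + n_prior, 1)

    def forward(self, x_raw, x_prior):
        z = self.invariant(x_raw)   # inspectable intermediate representation
        return self.head(torch.cat([z, x_prior], dim=1))

net = InvariantNet()
y = net(torch.randn(5, 16), torch.randn(5, 4))  # batch of 5 samples
print(y.shape)  # torch.Size([5, 1])
```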