Big data analytics has received much attention in the last decade and is viewed as one of the most important strategic resources for organizations. Yet the role of employees' data literacy seems to be neglected in the current literature. The aim of this study is twofold: (1) it develops data literacy as an organizational competency by identifying its dimensions and measurement, and (2) it examines the relationship between data literacy and governmental performance (internal and external). Using data from a survey of 120 Dutch governmental agencies, the proposed model was tested using PLS-SEM. The results empirically support the suggested theoretical framework and the corresponding measurement instrument. The relationship between data literacy and performance is partially supported: data literacy has a significant effect on internal performance. Counter-intuitively, however, no significant effect is found on external performance.
Current methods for energy diagnosis in heating, ventilation and air conditioning (HVAC) systems are not consistent with the process and instrumentation diagrams (P&IDs) that engineers use to design and operate these systems, which has led to very limited application of energy performance diagnosis in practice. In a previous paper, a generic reference architecture – hereafter referred to as the 4S3F (four symptoms and three faults) framework – was developed. Because it is closely related to the way HVAC experts diagnose problems in HVAC installations, 4S3F largely overcomes the problem of limited application. The present article addresses the fault diagnosis process using automated fault identification (AFI) based on symptoms detected with a diagnostic Bayesian network (DBN). It demonstrates that possible faults can be extracted from P&IDs at different levels and that P&IDs form the basis for setting up effective DBNs. The process was applied to real sensor data covering a whole year. In a case study of a thermal energy plant, control faults were successfully isolated using balance, energy performance and operational state symptoms. Correcting the isolated faults led to annual primary energy savings of 25%. An analysis showed that the outcomes are not sensitive to the probability values assumed in the DBN model. Link to the formal publication via its DOI: https://doi.org/10.1016/j.enbuild.2020.110289
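The core idea of symptom-based fault isolation with a Bayesian network can be illustrated with a minimal sketch. This is not the 4S3F model itself: the fault, the symptom names and all probability values below are hypothetical, chosen only to show how observed symptoms update the posterior probability of a fault via Bayes' rule.

```python
# Minimal two-layer Bayesian diagnosis sketch: one fault node F and two
# symptom nodes that are conditionally independent given F.
# All names and numbers are hypothetical illustrations, not 4S3F values.

def posterior_fault(p_fault, p_s_given_f, p_s_given_not_f, observed):
    """P(F=1 | observed symptoms), by enumerating the two fault states."""
    like_f = p_fault            # joint weight of the "fault present" branch
    like_not_f = 1.0 - p_fault  # joint weight of the "no fault" branch
    for i, s in enumerate(observed):
        like_f *= p_s_given_f[i] if s else 1.0 - p_s_given_f[i]
        like_not_f *= p_s_given_not_f[i] if s else 1.0 - p_s_given_not_f[i]
    return like_f / (like_f + like_not_f)

# Hypothetical conditional probabilities: a control fault triggers an
# energy-balance symptom 90% of the time and an operational-state
# symptom 70% of the time; both are rare when no fault is present.
p = posterior_fault(
    p_fault=0.05,
    p_s_given_f=[0.9, 0.7],
    p_s_given_not_f=[0.05, 0.1],
    observed=[True, True],
)
```

With both symptoms observed, the posterior rises from the 5% prior to roughly 87%, which is the kind of evidence accumulation a DBN performs across the symptom generators derived from a P&ID.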
A considerable amount of literature has been published on corporate reputation, branding and brand image. These studies are extensive and rely heavily on questionnaires and statistical analysis. Although extensive research has been carried out, no single study was found that attempted to predict corporate reputation performance from data collected from media sources. To perform this task, a biLSTM neural network extended with an attention mechanism was used, an architecture known to obtain excellent performance on NLP tasks. The resulting model achieves highly competitive results: an F1 score of around 72%, an accuracy of 92% and a loss of around 20%.
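The attention mechanism on top of the biLSTM can be sketched without any deep-learning framework: each timestep's hidden state is scored against a learned context vector, the scores are softmax-normalized, and the weighted sum forms the document representation. The toy vectors below are illustrative only; they stand in for trained biLSTM hidden states and a trained context vector.

```python
import math

def attention_pool(hidden_states, context):
    """Dot-product attention: weight each timestep's hidden state by its
    softmax-normalized similarity to a (here: made-up) context vector."""
    scores = [sum(h_i * c_i for h_i, c_i in zip(h, context)) for h in hidden_states]
    m = max(scores)                             # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(hidden_states[0])
    pooled = [sum(w * h[d] for w, h in zip(weights, hidden_states))
              for d in range(dim)]
    return pooled, weights

# Two hypothetical 2-d hidden states; the first aligns with the context
# vector, so attention should favour it.
pooled, weights = attention_pool([[1.0, 0.0], [0.0, 1.0]], [1.0, 0.0])
```

The pooled vector would then feed a classification layer; in the reported model the weights additionally indicate which media passages drive a reputation prediction.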
The AR in Staged Entertainment project focuses on utilizing immersive technologies to strengthen performances and create resilience in live events. In this project, the Experiencelab at BUas compares live and pre-recorded events that use augmented reality technology to add a layer to the user's experience. Experiences will be measured through, among other methods, observational biometric measurements. The project runs in the Experiencelab of BUas with partners The Effenaar and 4DR Studio, and is connected to the networks and goals of Chronosphere, Digireal and Makerspace. The project is powered by Fieldlab Events (PPS / ClickNL).
Production processes can be made ‘smarter’ by exploiting the data streams that are generated by the machines that are used in production. In particular, these data streams can be mined to build a model of the production process as it was really executed – as opposed to how it was envisioned. This model can subsequently be analyzed and stress-tested to explore possible causes of production problems and to analyze what-if scenarios, without disrupting the production process itself. It has been shown that such models can successfully be used to diagnose possible causes of production problems, including scrap products and machine defects. Ideally, they can even be used to model and analyze production processes that have not been implemented yet, based on data from existing production processes and techniques from artificial intelligence that can predict how the new process is likely to behave in practice in terms of the data that its machines generate. This is especially important in mass customization processes, where the process to create each product may be unique, and can only feasibly be tested using model- and data-driven techniques like the one proposed in this project.

Against this background, the goal of this project is to develop a method and toolkit for mining, modelling and analyzing production processes, using the time series data that is generated by machines, to: (i) analyze the performance of an existing production process; (ii) diagnose causes of production problems; and (iii) certify that a new – not yet implemented – production process leads to high-quality products. The method is developed by researching and combining techniques from the area of Artificial Intelligence with techniques from Operations Research.
In particular, it uses: process mining to relate time series data to production processes; queueing networks to determine likely paths through the production processes and detect anomalies that may be the cause of production problems; and generative adversarial networks to generate likely future production scenarios and sample scenarios of production problems for diagnostic purposes. The techniques will be evaluated and adapted in implementations at the partners from industry, using a design science approach. In particular, implementations of the method are made for: explaining production problems; explaining machine defects; and certifying the correct operation of new production processes.
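A first step in process mining as described above is to turn machine event data into a directly-follows graph: counting how often activity B immediately follows activity A across production cases. The tiny event log below is a hypothetical illustration, not data from the project's industry partners.

```python
from collections import Counter

def directly_follows(event_log):
    """Count directly-follows pairs over all traces; the resulting graph
    is a basic process-mining model of the process as actually executed."""
    dfg = Counter()
    for trace in event_log:
        for a, b in zip(trace, trace[1:]):
            dfg[(a, b)] += 1
    return dfg

# Hypothetical production traces; the skipped "paint" step in the second
# trace shows up as a rare edge, a candidate cause of scrap products.
log = [
    ["cut", "weld", "paint", "inspect"],
    ["cut", "weld", "inspect"],
    ["cut", "weld", "paint", "inspect"],
]
dfg = directly_follows(log)
```

Edge frequencies like these are what downstream steps (queueing analysis, anomaly detection) would consume: frequent edges describe the envisioned process, while rare edges flag deviations worth diagnosing.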
Collaborative networks for sustainability are emerging rapidly to address urgent societal challenges. By bringing together organizations with different knowledge bases, resources and capabilities, collaborative networks enhance information exchange, knowledge sharing and learning opportunities to address these complex problems that cannot be solved by organizations individually. Nowhere is this more apparent than in the apparel sector, where examples of collaborative networks for sustainability are plenty, for example the Sustainable Apparel Coalition, Zero Discharge of Hazardous Chemicals, and the Fair Wear Foundation. Companies like C&A and H&M, but also smaller players, join these networks to fulfil their social responsibility. Collaborative networks are unlike traditional forms of organizations; they are loosely structured collectives of different, often competing organizations, with dynamic membership, and they usually lack legal status. However, they do not emerge or organize on their own; they need network orchestrators who manage the network in terms of activities and participants. But network orchestrators face many challenges. They have to balance the interests of diverse companies and deal with tensions that often arise between them, for example over sharing innovative knowledge. Orchestrators also have to “sell” the value of the network to potential new participants, who decide which networks to join based on the benefits they expect from participating. Network orchestrators often do not know the best way to maintain engagement, commitment and enthusiasm, or how to ensure knowledge and resource sharing, especially when competitors are involved. Furthermore, collaborative networks receive funding from grants or subsidies, creating financial uncertainty about their continuity. Raising financing from the private sector is difficult, and network orchestrators compete more and more for resources.
When networks dissolve or become dysfunctional (due to a lack of value creation and capture for participants, a lack of financing or a non-functioning business model), the collective value that has been created and accrued over time may be lost. This is problematic given that industrial transformations towards sustainability take many years and durable organizational forms are required to ensure ongoing support for this change. Network orchestration is a new profession. There are no guidelines, handbooks or good practices for how to perform this role, nor is there professional education or a professional association that represents network orchestrators. This is urgently needed, as network orchestrators struggle with their role in governing networks so that they create and capture value for participants and ultimately ensure better network performance and survival. This project aims to foster the professionalization of the network orchestrator role by: (a) generating knowledge and developing and testing collaborative network governance models, facilitation tools and collaborative business modeling tools that enable network orchestrators to improve the performance of collaborative networks in terms of collective value creation (network level) and private value capture (network participant level); and (b) organizing platform activities for network orchestrators to exchange ideas and best practices and to learn from each other, thereby facilitating the formation of a professional identity, standards and a community of network orchestrators.