Although governments are investing heavily in big data analytics, reports show mixed results in terms of performance. Whilst big data analytics capability has provided a valuable lens in business research and seems useful for the public sector, little is known about its relationship with governmental performance. This study aims to explain how big data analytics capability leads to governmental performance. Using a survey research methodology, an integrated conceptual model, developed from prior literature, is proposed that highlights a comprehensive set of big data analytics resources influencing governmental performance. Using a PLS-SEM approach, the results strongly support the posited hypotheses: big data analytics capability has a strong impact on governmental efficiency, effectiveness, and fairness. The findings confirm that big data analytics capability plays the same imperative role in governmental performance in the public sector that earlier studies found in the private sector. This study also validates measures of governmental performance.
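Full PLS-SEM estimation requires dedicated software, but the core idea, aggregating survey indicators into construct scores and estimating a structural path between them, can be illustrated with a crude stand-in on synthetic data. All item values and the 0.6 effect size below are invented for illustration; real PLS-SEM iteratively re-weights indicators rather than simply averaging them.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical Likert-style survey items, three indicators per construct.
bdac_items = rng.normal(5, 1, (n, 3))  # big data analytics capability items
# Performance items driven by BDAC plus noise (0.6 is an invented effect size).
perf_items = 0.6 * bdac_items.mean(1, keepdims=True) + rng.normal(2, 1, (n, 3))

# Composite construct scores: mean of each construct's indicators
# (a simplification of PLS-SEM's iterative indicator weighting).
bdac = bdac_items.mean(axis=1)
perf = perf_items.mean(axis=1)

# Standardized structural path coefficient (here: the sample correlation).
z = lambda v: (v - v.mean()) / v.std()
path = z(bdac) @ z(perf) / n
print(f"BDAC -> performance path: {path:.2f}")
```

With the invented effect size above, the recovered path is positive and sizeable, mirroring the direction of the reported result.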
This paper provides a management perspective on the organisational factors that contribute to the reduction of food waste, applying design science principles to explore causal relationships between food distribution (organisational) and consumption (societal) factors. Qualitative data were collected, from an organisational perspective, from commercial food consumers along with large-scale food importers, distributors, and retailers. Cause-effect models are built and “what-if” simulations are conducted through the development and application of a Fuzzy Cognitive Map (FCM) approach to elucidate dynamic interrelationships. The simulation models provide practical insight into existing and emergent food-loss scenarios, and suggest the need for big data sets so that generalisable findings can be extrapolated from a more detailed quantitative exercise. This research offers evidence to support policymakers in developing policies that facilitate interventions to reduce food losses. It also contributes to the literature on sustaining, and potentially improving, levels of food security, underpinned by empirically constructed policy models that identify potential behavioural changes. By extending these simulation models against the backdrop of a proposed big data framework for food security, the study sets out avenues for future research on designing and constructing big data research in food supply chains. It therefore provides policymakers with a means to evaluate new and existing policies, whilst also offering a practical basis through which food chains can be made more resilient through management practices and policy decisions.
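The FCM mechanics behind such what-if simulations can be sketched in a few lines: concept activations are iterated through a sigmoid-squashed weighted sum until they settle, and a scenario is run by clamping a driver concept to a different value. The concepts, weights, and activation values below are hypothetical, not taken from the study's data.

```python
import numpy as np

def fcm_simulate(W, init, clamp=(), steps=30):
    """Iterate a Fuzzy Cognitive Map: each concept's next activation is the
    sigmoid of its current value plus the weighted influence of the others.
    Concepts listed in `clamp` are held fixed (scenario drivers)."""
    state = np.asarray(init, dtype=float)
    for _ in range(steps):
        nxt = 1.0 / (1.0 + np.exp(-(state + W.T @ state)))
        nxt[list(clamp)] = state[list(clamp)]
        state = nxt
    return state

# Hypothetical concepts: 0 = over-ordering, 1 = cold-chain failures, 2 = food waste
W = np.array([
    [0.0, 0.0, 0.7],   # over-ordering drives waste up
    [0.0, 0.0, 0.5],   # cold-chain failures drive waste up
    [0.0, 0.0, 0.0],
])

baseline = fcm_simulate(W, [0.8, 0.6, 0.5], clamp=(0, 1))
what_if  = fcm_simulate(W, [0.2, 0.6, 0.5], clamp=(0, 1))  # reduce over-ordering
print(f"waste: baseline {baseline[2]:.2f} vs what-if {what_if[2]:.2f}")
```

Comparing the settled waste activation under the two driver settings is exactly the kind of cause-effect "what-if" comparison the abstract describes, just on a toy map.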
Analyzing historical decision-related data can help support actual operational decision-making processes, and decision mining can be employed for such analysis. This paper proposes the Decision Discovery Framework (DDF), designed to develop, adapt, or select a decision discovery algorithm by outlining specific guidelines for input data usage, classifier handling, and decision model representation. The framework incorporates Decision Model and Notation (DMN) for enhanced comprehensibility and normalization to simplify decision tables. The framework’s efficacy was tested by adapting the C4.5 algorithm into the DM45 algorithm. The proposed adaptations are (1) the utilization of a decision log, (2) ensuring an unpruned decision tree, (3) the generation of DMN, and (4) the normalization of decision tables. Future research can focus on supporting practitioners in modeling decisions, ensuring their decision-making is compliant, and suggesting improvements to the modeled decisions. Another future research direction is to explore the ability to process unstructured data as input for the discovery of decisions.
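As a rough illustration of adaptations (1) and (2) — mining an unpruned tree from a decision log and flattening its leaves into decision-table-style rules — here is a sketch using scikit-learn's CART implementation (not C4.5/DM45 itself). The loan-style decision log, attribute names, and outcomes are invented.

```python
from sklearn.tree import DecisionTreeClassifier

# Hypothetical decision log: (amount, is_existing_customer) -> outcome
X = [[500, 1], [1500, 1], [1500, 0], [3000, 0], [300, 0], [2500, 1]]
y = ["approve", "approve", "review", "review", "approve", "approve"]

# With no pruning parameters set, the tree is grown unpruned until the
# leaves are pure -- mirroring adaptation (2) described above.
clf = DecisionTreeClassifier(random_state=0).fit(X, y)

def tree_to_rules(tree, feature_names):
    """Walk the fitted tree and emit one (conditions, outcome) rule per
    leaf -- a crude stand-in for a DMN-style decision table row."""
    t = tree.tree_
    rules = []
    def walk(node, conds):
        if t.children_left[node] == -1:  # leaf node
            outcome = tree.classes_[t.value[node][0].argmax()]
            rules.append((conds, outcome))
            return
        name, thr = feature_names[t.feature[node]], t.threshold[node]
        walk(t.children_left[node], conds + [f"{name} <= {thr:.1f}"])
        walk(t.children_right[node], conds + [f"{name} > {thr:.1f}"])
    walk(0, [])
    return rules

for conds, outcome in tree_to_rules(clf, ["amount", "existing_customer"]):
    print(" AND ".join(conds) or "always", "->", outcome)
```

Each printed rule corresponds to one row of the flattened decision table; a normalization step (adaptation 4) would then merge or simplify overlapping rows.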
The scientific publishing industry is rapidly transitioning towards information analytics. This shift disproportionately benefits large companies, which can afford to deploy digital technologies like knowledge graphs that index their contents and power advanced search engines. Small and medium publishing enterprises, by contrast, often lack the resources to fully embrace such digital transformations. This divide is acutely felt in the arts, humanities, and social sciences. Scholars from these disciplines are largely unable to benefit from modern scientific search engines, because their publishing ecosystem is made up of many specialized businesses which cannot, individually, develop comparable services. We propose to start bridging this gap by democratizing access to knowledge graphs – the technology underpinning modern scientific search engines – for small and medium publishers in the arts, humanities, and social sciences. Their contents, largely made up of books, already contain rich, structured information – such as references and indexes – which can be automatically mined and interlinked. We plan to develop a framework for extracting structured information and creating knowledge graphs from it. We will, as much as possible, consolidate existing proven technologies into a single codebase instead of reinventing the wheel. Our consortium is a collaboration between researchers in scientific information mining, Odoma, an AI consulting company, and the publisher Brill, which shares its data and expertise. Brill will be able to immediately put the project results to use to improve its internal processes and services. Furthermore, our results will be published as open source under a commercially friendly license, in order to foster the adoption and future development of the framework by other publishers.
Ultimately, our proposal is an example of industry innovation where, instead of scaling up, we scale wide by creating a common resource which many small players can then use and expand upon.
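As a toy illustration of the kind of mining involved: back-of-book index entries already pair names with page numbers, and these pairs can be lifted into knowledge-graph triples. The index lines and the predicate name below are invented for the sketch.

```python
import re

# Hypothetical back-of-book index lines, as commonly printed.
index_lines = [
    "Erasmus, Desiderius, 12, 45-47",
    "Spinoza, Baruch, 101, 230",
]

def index_to_triples(lines):
    """Mine (subject, predicate, object) triples from printed index
    entries -- a minimal sketch of structured-information extraction."""
    triples = []
    for line in lines:
        # Name is everything before the first page-number token.
        m = re.match(r"^(.*?), (\d[\d,\s-]*)$", line)
        if not m:
            continue
        person, pages = m.group(1), m.group(2)
        for page in re.split(r",\s*", pages):
            triples.append((person, "mentionedOnPage", page))
    return triples

kg = index_to_triples(index_lines)
for triple in kg:
    print(triple)
```

Interlinking would then merge "Erasmus, Desiderius" with the same entity mined from references or other books; a production pipeline would emit RDF rather than plain tuples.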
ILIAD builds on the assets resulting from two decades of investments in policies and infrastructures for the blue economy and aims at establishing an interoperable, data-intensive, and cost-effective Digital Twin of the Ocean (DTO). It capitalizes on the explosion of new data provided by many different Earth data sources and advanced computing infrastructures (cloud computing, HPC, the Internet of Things, big data, social networking, and more) in an inclusive, virtual/augmented, and engaging fashion to address all Earth data challenges. It will contribute towards a sustainable ocean economy as defined by the Centre for the Fourth Industrial Revolution and the Ocean, a hub for global, multi-stakeholder co-operation.
In the past decade, smaller drones in particular have started to claim their share of the sky due to their potential applications in the civil sector as flying eyes and noses and, very recently, as flying hands. Network partners from various application domains (safety, agriculture, energy, and logistics) are curious about the next leap in this field, namely collaborative Sky-workers. Their main practical question is essentially: “Can multiple small drones jointly transport a large object at high altitude in outdoor applications?” The industrial partners, together with Saxion and the RUG, will conduct a feasibility study to investigate whether it is possible to develop these collaborative Sky-workers and to identify which possibilities this new technology will offer. Design science research methodology, which focuses on solution-oriented applied research involving multiple iterations with rigorous evaluations, will be used to research the feasibility of the main technological building blocks:
• Accurate localization based on onboard sensors.
• A safe and optimal interaction controller for collaborative aerial transport.
Within this project, the first proofs of concept will be developed. The results of this project will be used to expand the existing network and formulate a larger project addressing additional critical aspects, in order to develop a complete framework for collaborative drones.
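One core sub-problem of collaborative aerial transport can be illustrated with a planar, quasi-static sketch: allocating per-drone thrusts so that the team balances both the payload's weight and its torque about the centre of mass. The drone positions, grip point, and payload weight below are hypothetical, and a real interaction controller would of course handle dynamics and disturbances as well.

```python
import numpy as np

def share_load(positions, com, weight):
    """Minimum-norm thrust allocation for drones carrying a rigid beam:
    thrusts must sum to the payload weight and produce zero net torque
    about the centre of mass (planar, quasi-static simplification)."""
    offsets = np.asarray(positions, dtype=float) - com
    # Constraint rows: total force = weight, total torque about CoM = 0.
    A = np.vstack([np.ones_like(offsets), offsets])
    b = np.array([weight, 0.0])
    thrusts, *_ = np.linalg.lstsq(A, b, rcond=None)
    return thrusts

# Two drones gripping a 2 m beam whose centre of mass sits off-centre:
# the drone closer to the CoM automatically takes the larger share.
thrusts = share_load([0.0, 2.0], com=0.5, weight=40.0)
print(thrusts)  # unequal shares that restore force/torque balance
```

For more than two drones the same least-squares solve yields the minimum-norm allocation, a common starting point before adding actuator limits or optimality criteria.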