Despite the promises of learning analytics and the existence of several learning analytics implementation frameworks, large-scale adoption of learning analytics within higher education institutions remains low. Extant frameworks either focus on a specific element of learning analytics implementation, for example, policy or privacy, or lack operationalization of the organizational capabilities necessary for successful deployment. Therefore, this literature review addresses the research question “What capabilities for the successful adoption of learning analytics can be identified in existing literature on big data analytics, business analytics, and learning analytics?” Our research is grounded in resource-based view theory, and we extend the scope beyond the field of learning analytics to include capability frameworks from the more mature research fields of big data analytics and business analytics. This paper’s contribution is twofold: 1) it provides a literature review of known capabilities for big data analytics, business analytics, and learning analytics and 2) it introduces a capability model to support the implementation and uptake of learning analytics. During our study, we identified and analyzed 15 key studies. By synthesizing the results, we identified 34 organizational capabilities important to the adoption of analytical activities within an institution and 461 ways to operationalize these capabilities. Five categories of capabilities can be distinguished: Data, Management, People, Technology, and Privacy & Ethics. Capabilities presently absent from existing learning analytics frameworks concern sourcing and integration, market, knowledge, training, automation, and connectivity. Based on the results of the review, we present the Learning Analytics Capability Model: a model that provides senior management and policymakers with concrete operationalizations to build the capabilities necessary for successful learning analytics adoption.
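As a purely illustrative sketch of how institutions might track capabilities across the five categories named above, the snippet below models a capability as a small Python data structure. The category names come from the abstract; the Capability class, the coverage metric, the example operationalizations, and the assignment of "Training" to the "People" category are assumptions for illustration only, not part of the published model.

    from dataclasses import dataclass, field

    # The five capability categories identified in the review.
    CATEGORIES = ["Data", "Management", "People", "Technology", "Privacy & Ethics"]

    @dataclass
    class Capability:
        # Hypothetical structure for tracking one of the 34 capabilities.
        name: str
        category: str                                             # one of CATEGORIES
        operationalizations: list = field(default_factory=list)   # concrete practices

        def coverage(self, implemented: int) -> float:
            # Share of listed operationalizations already in place (illustrative metric).
            return implemented / len(self.operationalizations) if self.operationalizations else 0.0

    # Example entry; the category assignment and operationalizations are placeholders.
    training = Capability(
        name="Training",
        category="People",
        operationalizations=["staff workshops", "onboarding for new analysts"],
    )
    print(training.category, training.coverage(implemented=1))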
Although governments are investing heavily in big data analytics, reports show mixed results in terms of performance. Whilst big data analytics capability has provided a valuable lens in business and seems useful for the public sector, there is little knowledge of its relationship with governmental performance. This study aims to explain how big data analytics capability leads to governmental performance. Using a survey research methodology, an integrated conceptual model is proposed that highlights a comprehensive set of big data analytics resources influencing governmental performance. The conceptual model was developed based on prior literature. The results of a PLS-SEM analysis strongly support the posited hypotheses: big data analytics capability has a strong impact on governmental efficiency, effectiveness, and fairness. The findings of this paper confirm the imperative role of big data analytics capability in governmental performance in the public sector, which earlier studies had found in the private sector. This study also validates measures of governmental performance.
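For illustration only, the sketch below tests the same directional hypotheses on simulated composite survey scores using ordinary least squares from statsmodels. It is a simplified stand-in, not the PLS-SEM procedure used in the study, and all variable names and data are invented.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(42)
    n = 200  # hypothetical number of survey respondents

    # Simulated composite scores (Likert-style averages); purely illustrative.
    bdac = rng.uniform(1, 7, n)  # big data analytics capability
    df = pd.DataFrame({
        "bdac": bdac,
        "efficiency": 0.6 * bdac + rng.normal(0, 1, n),
        "effectiveness": 0.5 * bdac + rng.normal(0, 1, n),
        "fairness": 0.4 * bdac + rng.normal(0, 1, n),
    })

    # One regression per performance dimension, mirroring the hypothesized paths.
    X = sm.add_constant(df[["bdac"]])
    for outcome in ["efficiency", "effectiveness", "fairness"]:
        model = sm.OLS(df[outcome], X).fit()
        print(outcome, round(model.params["bdac"], 2), round(model.pvalues["bdac"], 4))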
Big data has not only led to challenging technical questions; it also comes with all kinds of new ethical and moral issues. Handling big data responsibly means thinking these issues through as well, because poor data use can have adverse consequences for large groups of people and for organizations. In the final installment of this series, Klaas Jan Mollema and Niek van Antwerpen take a pragmatic look at the ethical side of big data, without getting stuck on its negative effects.
Inhalation therapy is essential for the management of respiratory conditions such as asthma and chronic obstructive pulmonary disease. However, current inhalation systems face limitations, including polydisperse aerosols that reduce drug delivery efficiency and complex treatment regimens that affect patient adherence. To improve drug targeting and efficacy, Gilbert Innovation B.V. is developing a next-generation soft-mist inhaler based on electrohydrodynamic atomization (EHDA), which produces uniform, micrometer-sized droplets. Effective drug delivery requires high flow rates and precise aerosol discharge to ensure deep lung deposition while minimizing losses to the device and oropharynx. To achieve this, the device employs a multi-nozzle system for increased flow and corona discharge needles for charge neutralization. However, ensuring uniform neutralization across multiple nozzles and maintaining stable electrospray operation remain key challenges. COSMIC aims to increase system robustness by optimizing neutralization efficiency, refining material selection, and controlling electrospray stability under varying conditions. The electrospray control system will incorporate advanced strategies leveraging computer vision, machine learning and big data analytics. These innovations will increase efficiency, accessibility and patient comfort in inhalation therapy.
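As a highly simplified sketch of the kind of closed-loop control alluded to above, the snippet below nudges a nozzle voltage toward a stable electrospray regime based on the output of a (placeholder) computer-vision classifier. All function names, spray-mode labels, voltage values, and interfaces are hypothetical and are not taken from the COSMIC project.

    # Illustrative feedback loop; all names, values, and interfaces are hypothetical.

    STABLE_MODE = "cone-jet"       # regime associated with uniform droplet formation
    VOLTAGE_STEP_V = 25.0          # hypothetical adjustment step per iteration

    def classify_spray_mode(frame) -> str:
        """Placeholder for a computer-vision / ML classifier of the spray image."""
        raise NotImplementedError

    def control_step(frame, voltage: float) -> float:
        """Adjust the nozzle voltage toward the stable electrospray regime."""
        mode = classify_spray_mode(frame)
        if mode == STABLE_MODE:
            return voltage                      # hold the current setting
        if mode == "dripping":                  # field too weak: raise the voltage
            return voltage + VOLTAGE_STEP_V
        return voltage - VOLTAGE_STEP_V         # multi-jet or unstable: lower the voltage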
The scientific publishing industry is rapidly transitioning towards information analytics. This shift disproportionately benefits large companies, which can afford to deploy digital technologies like knowledge graphs to index their contents and power advanced search engines. Small and medium publishing enterprises, by contrast, often lack the resources to fully embrace such digital transformations. This divide is acutely felt in the arts, humanities and social sciences. Scholars from these disciplines are largely unable to benefit from modern scientific search engines, because their publishing ecosystem consists of many specialized businesses that cannot individually develop comparable services. We propose to start bridging this gap by democratizing access to knowledge graphs, the technology underpinning modern scientific search engines, for small and medium publishers in the arts, humanities and social sciences. Their contents, largely made of books, already contain rich, structured information, such as references and indexes, which can be automatically mined and interlinked. We plan to develop a framework for extracting this structured information and creating knowledge graphs from it. We will, as far as possible, consolidate existing, proven technologies into a single codebase rather than reinvent the wheel. Our consortium is a collaboration of researchers in scientific information mining, Odoma, an AI consulting company, and the publisher Brill, which shares its data and expertise. Brill will be able to put the project results to use immediately to improve its internal processes and services. Furthermore, our results will be published as open source under a commercially friendly license, in order to foster the adoption and future development of the framework by other publishers. Ultimately, our proposal is an example of industry innovation where, instead of scaling up, we scale wide by creating a common resource that many small players can then use and expand upon.
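As a minimal sketch of the kind of knowledge graph such a framework could produce, the snippet below uses the rdflib library to link a book to a mined bibliographic reference and an index entry. The namespace, identifiers, and properties are hypothetical and do not represent the project's actual data model.

    from rdflib import Graph, Literal, Namespace, RDF, URIRef

    # Hypothetical namespace; the project's actual vocabulary may differ.
    EX = Namespace("http://example.org/publishing/")

    g = Graph()
    book = URIRef(EX["book/example-monograph"])
    ref = URIRef(EX["reference/kant-1790"])
    entry = URIRef(EX["index-entry/aesthetics"])

    # Triples linking a book to a reference and an index term mined from its back matter.
    g.add((book, RDF.type, EX.Book))
    g.add((book, EX.cites, ref))
    g.add((ref, EX.citedAuthor, Literal("Immanuel Kant")))
    g.add((book, EX.hasIndexEntry, entry))
    g.add((entry, EX.term, Literal("aesthetics")))

    print(g.serialize(format="turtle"))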
Digitalisation has enabled businesses to access and utilise vast amounts of data. Business data analytics allows companies to employ the most recent and relevant data to comprehend situations and enhance decision-making. While the value of data itself is limited, substantial value can be directly or indirectly uncovered from data. This process is referred to as data monetisation. The most successful stories of data monetisation often originate from large corporations, as they have adequate resources to monetise their data. Notably, many such cases come from prominent Big Tech companies in North America. In contrast, small and medium-sized enterprises (SMEs) have lagged behind in utilising their digital data assets effectively. They are frequently constrained by limited resources to build up capabilities and fully exploit their data. This places them at a strategic disadvantage, particularly as digitalisation progressively reshapes markets and competitive relationships. Furthermore, the use of digital technologies and data is important in addressing societal challenges such as energy conservation, circularity, and the ageing of the population. This lag has been highlighted by the SMEs we have engaged with, whose managing directors have indicated a desire to operate on the basis of data, but whose companies lack the know-how and are unsure of ‘where to start’. Together with eight SMEs and other partners, we have defined a research project to gain insight into the potential of and obstacles to data monetisation in SMEs. More specifically, we will explore how SMEs can transform data into strategic assets and create value. We aim to demonstrate the journey of data monetisation and illustrate different possibilities for creating value from data in SMEs. We will take a holistic approach to examine the different aspects of data monetisation and their associations. The outcomes of this project are both practical and academic, including an SME handbook, academic papers, and case studies.