The value of a decision can be increased by analyzing the decision logic and its outcomes. The more often a decision is taken, the more data becomes available about its results. More available data leads to smarter decisions and increases the value a decision has for an organization. The research field addressing this problem is decision mining. By conducting a literature study on the current state of decision mining, we aim to identify research gaps and areas where decision mining can be improved. Our findings show that the concepts used in the decision mining field and related fields are ambiguous and overlapping. We identify future research directions to increase the quality and maturity of decision mining research. To achieve this, a shift is needed from a business-process-oriented decision mining approach to a decision-focused approach.
This study analyses students' interactions with recorded lectures at two universities in the Netherlands. The data logged by the lecture capture system (LCS) is combined with collected survey data. We describe the process of data pre-processing and analysis of the resulting full dataset, and then focus on the usage for the course with the most learner sessions. We found discrepancies as well as similarities between students' verbal reports and their actual usage as logged by the recorded-lecture servers. The analysis shows that recorded lectures are viewed to prepare for exams and assignments, and the data suggests that students who do this have a significantly higher chance of passing the exams. Given the discrepancies between verbal reports and actual usage, research should no longer rely on verbal reports alone.
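The reported link between viewing recorded lectures before an exam and passing it is the kind of association a standard test of independence can probe. A minimal sketch with hypothetical counts (not the study's data; table values are invented for illustration):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 contingency table (invented counts, not the study's data):
# rows = viewed recorded lectures before the exam (yes / no)
# cols = exam result (pass / fail)
table = np.array([[80, 20],
                  [45, 55]])

# Chi-square test of independence between viewing and passing
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.4f}, dof={dof}")
```

A small p-value here would indicate that pass rates differ between viewers and non-viewers; the study's actual analysis may have used a different test.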
This method paper presents a template solution for text mining of scientific literature using the R tm package. Literature to be analyzed can be collected manually or automatically using the code provided with this paper. Once the literature is collected, the three steps for conducting text mining can be performed as outlined below:
• loading and cleaning of text from articles,
• processing, statistical analysis, and clustering, and
• presentation of results using generalized and tailor-made visualizations.
The text mining steps can be applied to a single document, multiple documents, or time-series groups of documents. References are provided to three published peer-reviewed articles that use the presented text mining methodology. The main advantages of our method are: (1) its suitability for both research and educational purposes, (2) compliance with the Findable, Accessible, Interoperable, and Reusable (FAIR) principles, and (3) the availability of code and example data on GitHub under the open-source Apache V2 license.
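The paper's own template code is in R; since the three steps are generic, they can be sketched in Python as well. The following is a minimal illustration (a toy corpus and scikit-learn in place of the tm package, not the authors' code):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Step 1: load and clean text (here: a tiny hypothetical corpus)
docs = [
    "Text mining extracts patterns from scientific literature.",
    "Clustering groups similar documents together.",
    "Visualization presents text mining results to readers.",
    "Document clustering relies on term frequency statistics.",
]

# Step 2: processing, statistical analysis, and clustering
# (TF-IDF term weighting followed by k-means with k=2)
tfidf = TfidfVectorizer(stop_words="english", lowercase=True)
X = tfidf.fit_transform(docs)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Step 3: presentation of results (here: plain cluster membership listing)
for doc, label in zip(docs, labels):
    print(f"cluster {label}: {doc[:45]}")
```

The same three-step structure applies whether the input is a single document, multiple documents, or time-series groups of documents.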
Despite the numerous business benefits of data science, the number of data science models in production is limited. Data science model deployment presents many challenges, and many organisations have little model deployment knowledge. This research studied five model deployments in a Dutch government organisation. The study revealed that, as a result of model deployment, a data science subprocess is added to the target business process, the model itself can be adapted, model maintenance is incorporated into the model development process, and a feedback loop is established between the target business process and the model development process. These model deployment effects and the related deployment challenges differ between strategic and operational target business processes. Based on these findings, guidelines are formulated which can form a basis for future principles on how to successfully deploy data science models. Organisations can use these guidelines as suggestions for solving their own model deployment challenges.
During the past two decades, the implementation and adoption of information technology have rapidly increased. As a consequence, the way businesses operate has changed dramatically; for example, the amount of data has grown exponentially. Companies are looking for ways to use this data to add value to their business, which has implications for the manner in which (financial) governance needs to be organized. The main purpose of this study is to obtain insight into the changing role of controllers in adding value to the business by means of data analytics. To answer the research question, a literature study was first performed to establish a theoretical foundation concerning data analytics and its potential use. Second, nineteen interviews were conducted with controllers, data scientists and academics in the financial domain. Third, a focus group with experts was organized in which additional data were gathered. Based on the literature study and the participants' responses, it is clear that the challenge of the data explosion consists of converting data into information, knowledge and meaningful insights to support decision-making processes. Performing data analyses enables the controller to support rational decision making, complementing the intuitive decision making of (senior) management. In this way, the controller has the opportunity to take the lead in information provision within an organization. However, controllers need more advanced data science and statistics competencies to provide management with effective analyses. Specifically, we found that an important statistical skill is the visualization and communication of statistical analyses, which controllers need in order to grow into their role as business partner.
In the course of our supervisory work over the years, we have noticed that qualitative research tends to evoke a lot of questions and worries, so-called frequently asked questions (FAQs). This series of four articles intends to provide novice researchers with practical guidance for conducting high-quality qualitative research in primary care. By ‘novice’ we mean Master’s students and junior researchers, as well as experienced quantitative researchers who are engaging in qualitative research for the first time. This series addresses their questions and provides researchers, readers, reviewers and editors with references to criteria and tools for judging the quality of qualitative research papers. The second article focused on context, research questions and designs, and referred to publications for further reading. This third article addresses FAQs about sampling, data collection and analysis. The data collection plan needs to be broadly defined and open at first, and become flexible during data collection. Sampling strategies should be chosen in such a way that they yield rich information and are consistent with the methodological approach used. Data saturation determines sample size and will be different for each study. The most commonly used data collection methods are participant observation, face-to-face in-depth interviews and focus group discussions. Analyses in ethnographic, phenomenological, grounded theory, and content analysis studies yield different narrative findings: a detailed description of a culture, the essence of the lived experience, a theory, and a descriptive summary, respectively. The fourth and final article will focus on trustworthiness and publishing qualitative research.
Current research on data in policy has primarily focused on street-level bureaucrats, neglecting the changes in the work of policy advisors. This research fills this gap by presenting an explorative theoretical understanding of the integration of data, local knowledge and professional expertise in the work of policy advisors. The theoretical perspective we develop builds upon Vickers’s (1995, The Art of Judgment: A Study of Policy Making, Centenary Edition, SAGE) judgments in policymaking. Empirically, we present a case study of a Dutch law enforcement network for preventing and reducing organized crime. Based on interviews, observations, and documents collected in a 13-month ethnographic fieldwork period, we study how policy advisors within this network make their judgments. In contrast with the idea of data as a rationalizing force, our study reveals that how data sources are selected and analyzed for judgments is very much shaped by the existing local and expert knowledge of policy advisors. The weight given to data is highly situational: we found that policy advisors welcome data in scoping the policy issue, but for judgments more closely connected to actual policy interventions, data are given limited value.
Big data analytics has received much attention in the last decade and is viewed as one of the most important strategic resources for organizations. Yet the role of employees' data literacy seems to be neglected in the current literature. The aim of this study is twofold: (1) it develops data literacy as an organizational competency by identifying its dimensions and measurement, and (2) it examines the relationship between data literacy and governmental performance (internal and external). Using data from a survey of 120 Dutch governmental agencies, the proposed model was tested using PLS-SEM. The results empirically support the suggested theoretical framework and corresponding measurement instrument. They partially support the relationship between data literacy and performance: a significant effect of data literacy on internal performance was found. However, counter-intuitively, this significant effect was not found in relation to external performance.
Recorded lectures provide an integral recording of live lectures, enabling students to review those lectures at their own pace and whenever they want. Most research into students' use of recorded lectures has been done using surveys or interviews. Our research combines this data with data logged by the recording system. We present the two data collections and cover areas where the data can be triangulated to increase the credibility of the results or to question the student responses. The results of the triangulation show its value: it identifies discrepancies in the students' responses, in particular concerning their perceptions of how much they use the recorded lectures. It also shows that we lack data for a number of other areas, so surveys and interviews are still needed to get a complete picture.
Despite the promises of learning analytics and the existence of several learning analytics implementation frameworks, the large-scale adoption of learning analytics within higher educational institutions remains low. Extant frameworks either focus on a specific element of learning analytics implementation, for example, policy or privacy, or lack operationalization of the organizational capabilities necessary for successful deployment. Therefore, this literature review addresses the research question “What capabilities for the successful adoption of learning analytics can be identified in existing literature on big data analytics, business analytics, and learning analytics?” Our research is grounded in resource-based view theory and we extend the scope beyond the field of learning analytics and include capability frameworks for the more mature research fields of big data analytics and business analytics. This paper’s contribution is twofold: 1) it provides a literature review on known capabilities for big data analytics, business analytics, and learning analytics and 2) it introduces a capability model to support the implementation and uptake of learning analytics. During our study, we identified and analyzed 15 key studies. By synthesizing the results, we found 34 organizational capabilities important to the adoption of analytical activities within an institution and provide 461 ways to operationalize these capabilities. Five categories of capabilities can be distinguished – Data, Management, People, Technology, and Privacy & Ethics. Capabilities presently absent from existing learning analytics frameworks concern sourcing and integration, market, knowledge, training, automation, and connectivity. Based on the results of the review, we present the Learning Analytics Capability Model: a model that provides senior management and policymakers with concrete operationalizations to build the necessary capabilities for successful learning analytics adoption.