Abstract Despite the numerous business benefits of data science, the number of data science models in production is limited. Data science model deployment presents many challenges, and many organisations have little model deployment knowledge. This research studied five model deployments in a Dutch government organisation. The study revealed that, as a result of model deployment, a data science subprocess is added to the target business process, the model itself can be adapted, model maintenance is incorporated in the model development process, and a feedback loop is established between the target business process and the model development process. These model deployment effects and the related deployment challenges differ between strategic and operational target business processes. Based on these findings, guidelines are formulated which can form a basis for future principles on how to successfully deploy data science models. Organisations can use these guidelines as suggestions for solving their own model deployment challenges.
DOCUMENT
During the past two decades the implementation and adoption of information technology has increased rapidly. As a consequence, the way businesses operate has changed dramatically. For example, the amount of data has grown exponentially. Companies are looking for ways to use these data to add value to their business. This has implications for the manner in which (financial) governance needs to be organized. The main purpose of this study is to obtain insight into the changing role of controllers in order to add value to the business by means of data analytics. To answer the research question, first a literature study was performed to establish a theoretical foundation concerning data analytics and its potential use. Second, nineteen interviews were conducted with controllers, data scientists and academics in the financial domain. Third, a focus group with experts was organized in which additional data were gathered. Based on the literature study and the participants' responses, it is clear that the challenge of the data explosion consists of converting data into information, knowledge and meaningful insights to support decision-making processes. Performing data analyses enables the controller to support rational decision making and to complement the intuitive decision making by (senior) management. In this way, the controller has the opportunity to take the lead in the information provision within an organization. However, controllers need more advanced data science and statistical competences to be able to provide management with effective analyses. Specifically, we found that an important statistical skill is the visualization and communication of statistical analyses, which controllers need in order to grow into their role as business partner.
DOCUMENT
Analyzing historical decision-related data can help support actual operational decision-making processes. Decision mining can be employed for such analysis. This paper proposes the Decision Discovery Framework (DDF), designed to develop, adapt, or select a decision discovery algorithm by outlining specific guidelines for input data usage, classifier handling, and decision model representation. The framework incorporates the use of Decision Model and Notation (DMN) for enhanced comprehensibility, and normalization to simplify decision tables. The framework's efficacy was tested by adapting the C4.5 algorithm into the DM45 algorithm. The proposed adaptations include (1) the utilization of a decision log, (2) ensuring an unpruned decision tree, (3) the generation of DMN, and (4) the normalization of the decision table. Future research can focus on supporting practitioners in modeling decisions, ensuring their decision-making is compliant, and suggesting improvements to the modeled decisions. Another future research direction is to explore the ability to process unstructured data as input for the discovery of decisions.
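The abstract does not list the DM45 algorithm itself. Purely as an illustration of the general idea behind adaptations (1) and (2) — learning a fully unpruned decision tree from a decision log and reading each root-to-leaf path off as a decision-table rule — the sketch below uses a minimal ID3-style learner on an invented loan-decision log. All attribute names and data are hypothetical; this is not the actual DM45 implementation.

```python
import math
from collections import Counter

def entropy(rows, target):
    # Shannon entropy of the outcome column over a set of log rows
    counts = Counter(r[target] for r in rows)
    total = len(rows)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def best_attribute(rows, attrs, target):
    # Pick the attribute with the highest information gain
    base = entropy(rows, target)
    def gain(a):
        parts = {}
        for r in rows:
            parts.setdefault(r[a], []).append(r)
        return base - sum(len(p) / len(rows) * entropy(p, target)
                          for p in parts.values())
    return max(attrs, key=gain)

def grow_tree(rows, attrs, target):
    # Fully unpruned: grow until leaves are pure or attributes run out
    if len({r[target] for r in rows}) == 1 or not attrs:
        return Counter(r[target] for r in rows).most_common(1)[0][0]
    a = best_attribute(rows, attrs, target)
    branches = {}
    for r in rows:
        branches.setdefault(r[a], []).append(r)
    rest = [x for x in attrs if x != a]
    return (a, {v: grow_tree(sub, rest, target) for v, sub in branches.items()})

def to_rules(node, conditions=()):
    # Each root-to-leaf path becomes one decision-table row
    if not isinstance(node, tuple):
        return [(dict(conditions), node)]
    attr, branches = node
    rules = []
    for value, child in branches.items():
        rules += to_rules(child, conditions + ((attr, value),))
    return rules

# Hypothetical decision log: inputs plus the recorded decision outcome
log = [
    {"income": "high", "history": "good", "decision": "approve"},
    {"income": "high", "history": "bad",  "decision": "approve"},
    {"income": "low",  "history": "good", "decision": "approve"},
    {"income": "low",  "history": "bad",  "decision": "reject"},
]
tree = grow_tree(log, ["income", "history"], "decision")
rules = to_rules(tree)
```

The resulting rules correspond to decision-table rows; note how the rule for high income omits the credit-history condition entirely, which is the kind of simplification that normalizing a decision table makes explicit.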
MULTIFILE
Data mining seems to be a promising way to tackle the problem of unpredictability in MRO organizations. The Amsterdam University of Applied Sciences therefore cooperated with the aviation industry for a two-year applied research project exploring the possibilities of data mining in this area. Researchers studied more than 25 cases at eight different MRO enterprises, applying the CRISP-DM methodology as a structural guideline throughout the project. They explored, prepared and combined MRO data, flight data and external data, and used statistical and machine learning methods to visualize, analyse and predict maintenance. They also used the individual case studies to make predictions about the duration and costs of planned maintenance tasks, turnaround time and useful life of parts. Challenges presented by the case studies included time-consuming data preparation, access restrictions to external data sources and the still-limited data science skills in companies. Recommendations were made in terms of ways to implement data mining in MRO, and ways to overcome the related challenges. Overall, the research project has delivered promising proofs of concept and pilot implementations.
MULTIFILE
The pressure on long-term elderly care is increasing. Partly due to substantial population ageing, staff shortages and the continuous implementation of innovations, there is an enormous challenge in making long-term elderly care future-proof. Digital and technological developments play a relevant role here, partly because they lead to enormous amounts of data being collected. All these data, largely collected during daily work, can be used to gain insight into, and improve, the quality of care, quality of life and quality of work in long-term care. Partly through the use of advanced analysis methods, there are new opportunities to extract knowledge, and thereby value, from data. But what opportunities do data actually offer, and how do we turn such data into knowledge?
DOCUMENT
Current research on data in policy has primarily focused on street-level bureaucrats, neglecting the changes in the work of policy advisors. This research fills this gap by presenting an explorative theoretical understanding of the integration of data, local knowledge and professional expertise in the work of policy advisors. The theoretical perspective we develop builds upon Vickers’s (1995, The Art of Judgment: A Study of Policy Making, Centenary Edition, SAGE) judgments in policymaking. Empirically, we present a case study of a Dutch law enforcement network for preventing and reducing organized crime. Based on interviews, observations, and documents collected in a 13-month ethnographic fieldwork period, we study how policy advisors within this network make their judgments. In contrast with the idea of data as a rationalizing force, our study reveals that how data sources are selected and analyzed for judgments is very much shaped by the existing local and expert knowledge of policy advisors. The weight given to data is highly situational: we found that policy advisors welcome data in scoping the policy issue, but for judgments more closely connected to actual policy interventions, data are given limited value.
LINK
In the course of our supervisory work over the years, we have noticed that qualitative research tends to evoke a lot of questions and worries, so-called frequently asked questions (FAQs). This series of four articles intends to provide novice researchers with practical guidance for conducting high-quality qualitative research in primary care. By ‘novice’ we mean Master’s students and junior researchers, as well as experienced quantitative researchers who are engaging in qualitative research for the first time. This series addresses their questions and provides researchers, readers, reviewers and editors with references to criteria and tools for judging the quality of qualitative research papers. The second article focused on context, research questions and designs, and referred to publications for further reading. This third article addresses FAQs about sampling, data collection and analysis. The data collection plan needs to be broadly defined and open at first, and become flexible during data collection. Sampling strategies should be chosen in such a way that they yield rich information and are consistent with the methodological approach used. Data saturation determines sample size and will be different for each study. The most commonly used data collection methods are participant observation, face-to-face in-depth interviews and focus group discussions. Analyses in ethnographic, phenomenological, grounded theory, and content analysis studies yield different narrative findings: a detailed description of a culture, the essence of the lived experience, a theory, and a descriptive summary, respectively. The fourth and final article will focus on trustworthiness and publishing qualitative research.
DOCUMENT
Data analytics seems a promising approach to address the problem of unpredictability in MRO organizations. The Amsterdam University of Applied Sciences, in cooperation with the aviation industry, has initiated a two-year applied research project to explore the possibilities of data mining. More than 25 cases have been studied at eight different MRO enterprises. The CRISP-DM methodology was applied as a structural guideline throughout the project. The data within MROs were explored and prepared. Individual case studies, conducted with statistical and machine learning methods, successfully predicted, among other things, the duration of planned maintenance tasks, the optimal maintenance intervals, and the probability of findings occurring during maintenance tasks.
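The project's actual prediction models are not described in this abstract. As an illustration only, the sketch below shows the simplest possible baseline for predicting planned maintenance task duration: the historical mean duration per task code, with the overall mean as a fallback for unseen tasks. The task codes and hours are invented for the example; real case studies would compare richer statistical and machine learning models against a baseline like this one.

```python
from collections import defaultdict
from statistics import mean

def fit_duration_baseline(history):
    """history: (task_code, duration_hours) pairs from completed work orders.
    Returns a predictor that gives the mean duration per task code,
    falling back to the overall mean for task codes not seen before."""
    per_task = defaultdict(list)
    for task_code, hours in history:
        per_task[task_code].append(hours)
    overall = mean(h for _, h in history)
    task_means = {code: mean(hs) for code, hs in per_task.items()}
    return lambda task_code: task_means.get(task_code, overall)

# Hypothetical historical work orders: (task code, actual hours)
predict = fit_duration_baseline([
    ("A-check", 6.0), ("A-check", 8.0),
    ("wheel-change", 1.5), ("wheel-change", 2.5),
])
```

A baseline of this kind also makes the data-preparation challenges mentioned above concrete: it only works once task codes and recorded durations have been cleaned and made consistent across work orders.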
DOCUMENT
Process mining can roughly be defined as a data-driven approach to process management. The basic idea of process mining is to automatically distill and visualize business processes, using event logs from company IT systems (e.g. ERP, WMS, CRM), in order to identify specific areas for improvement at an operational level. An event log can be described as a database entry that signifies a specific action in a software application at a specific time. Simple examples of such actions are customer order entries, scanning an item in a warehouse, and registration of a patient for a hospital check-up.

Process mining has gained popularity in the logistics domain in recent years for three main reasons. Firstly, logistics IT systems store large and exponentially growing amounts of event data, providing detailed information on the history of logistics processes. Secondly, to outperform competitors, most organizations are searching for (new) ways to improve their logistics processes, such as reducing costs and lead time. Thirdly, since the 1970s the power of computers has grown at an astonishing rate, so the use of advanced algorithms for business purposes, which requires a certain amount of computational power, has become more accessible.

Before diving into process mining, this course will first discuss some basic concepts, theories, and methods regarding the visualization and improvement of business processes.
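The core step of distilling a process from an event log can be sketched very compactly. The example below builds a directly-follows graph, one of the simplest process mining constructs, counting how often one activity immediately follows another within the same case. The event log here is a hypothetical order-handling extract (case id, activity, timestamp), invented for illustration; real process mining tools add filtering, visualization, and more sophisticated discovery algorithms on top of this idea.

```python
from collections import defaultdict

# Hypothetical event log: (case_id, activity, timestamp) tuples,
# as might be exported from an ERP or WMS system
event_log = [
    (1, "receive order", 1), (1, "check stock", 2), (1, "ship", 3),
    (2, "receive order", 1), (2, "check stock", 2),
    (2, "back-order", 3), (2, "ship", 4),
]

def directly_follows(log):
    """Count how often activity b immediately follows activity a
    within the same case: the directly-follows graph."""
    # Group events into per-case traces, ordered by timestamp
    traces = defaultdict(list)
    for case_id, activity, ts in sorted(log, key=lambda e: (e[0], e[2])):
        traces[case_id].append(activity)
    # Count consecutive activity pairs across all traces
    counts = defaultdict(int)
    for trace in traces.values():
        for a, b in zip(trace, trace[1:]):
            counts[(a, b)] += 1
    return dict(counts)

dfg = directly_follows(event_log)
```

Rendering these edge counts as a graph gives the kind of process map that highlights, for example, how often orders take the back-order detour before shipping.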
MULTIFILE
Continuous monitoring, continuous auditing and continuous assurance are three methods that utilize a high degree of business intelligence and analytics. The increased interest in these three methods has led to multiple studies that analyze each method, or a combination of methods, at a micro level. However, few studies have focused on the perceived usage scenarios of the three methods at a macro level, through the eyes of the end-user. In this study, we bridge this gap by identifying the different usage scenarios for each of the methods according to the end-users: the accountants. Data were collected through a survey and analyzed by applying a nominal analysis and a process mining algorithm. Results show that respondents indicated 13 unique usage scenarios, while none of the three methods is included in all 13 scenarios, which illustrates the diversity of opinions in accountancy practice in the Netherlands.
DOCUMENT