Over the past two decades, the implementation and adoption of information technology have increased rapidly. As a consequence, the way businesses operate has changed dramatically; the amount of data, for example, has grown exponentially. Companies are looking for ways to use this data to add value to their business, which has implications for how (financial) governance needs to be organized. The main purpose of this study is to obtain insight into the changing role of controllers as they seek to add value to the business by means of data analytics. To answer the research question, a literature study was first performed to establish a theoretical foundation concerning data analytics and its potential use. Second, nineteen interviews were conducted with controllers, data scientists and academics in the financial domain. Third, a focus group with experts was organized in which additional data were gathered. Based on the literature study and the participants' responses, it is clear that the challenge of the data explosion consists of converting data into information, knowledge and meaningful insights to support decision-making processes. Performing data analyses enables the controller to support rational decision making and thereby complement the intuitive decision making of (senior) management. In this way, the controller has the opportunity to take the lead in the provision of information within an organization. However, controllers need more advanced data science and statistical competences to provide management with effective analyses. Specifically, we found that an important statistical skill is the visualization and communication of statistical analyses. Controllers need this skill in order to grow into their role as business partner.
In the course of our supervisory work over the years, we have noticed that qualitative research tends to evoke a lot of questions and worries, so-called frequently asked questions (FAQs). This series of four articles intends to provide novice researchers with practical guidance for conducting high-quality qualitative research in primary care. By ‘novice’ we mean Master’s students and junior researchers, as well as experienced quantitative researchers who are engaging in qualitative research for the first time. This series addresses their questions and provides researchers, readers, reviewers and editors with references to criteria and tools for judging the quality of qualitative research papers. The second article focused on context, research questions and designs, and referred to publications for further reading. This third article addresses FAQs about sampling, data collection and analysis. The data collection plan needs to be broadly defined and open at first, and remain flexible during data collection. Sampling strategies should be chosen in such a way that they yield rich information and are consistent with the methodological approach used. Data saturation determines sample size and will be different for each study. The most commonly used data collection methods are participant observation, face-to-face in-depth interviews and focus group discussions. Analyses in ethnographic, phenomenological, grounded theory, and content analysis studies yield different narrative findings: a detailed description of a culture, the essence of the lived experience, a theory, and a descriptive summary, respectively. The fourth and final article will focus on trustworthiness and publishing qualitative research.
The current set of research methods on ictresearchmethods.nl contains only one method that refers to machine learning: the “Data analytics” method in the “Lab” strategy. This does not reflect the way of working in ML projects, where data analytics is not a method for answering one question but the main goal of the project. For ML projects, the Data Analytics method should be divided into several smaller steps, each becoming a method of its own. In other words, we should treat the data analytics (or, more appropriately, ML engineering) process the same way the software engineering process is treated in the framework. In the remainder of this post I will briefly discuss each of the existing research methods and how they apply to ML projects. The methods are organized by strategy, and in the discussion I will give pointers to relevant tools and literature for ML projects.
Analyzing historical decision-related data can help support operational decision-making processes. Decision mining can be employed for such analysis. This paper proposes the Decision Discovery Framework (DDF), designed to develop, adapt, or select a decision discovery algorithm by outlining specific guidelines for input data usage, classifier handling, and decision model representation. The framework incorporates the use of Decision Model and Notation (DMN) for enhanced comprehensibility, and normalization to simplify decision tables. The framework’s efficacy was tested by adapting the C4.5 algorithm into the DM45 algorithm. The proposed adaptations are (1) the use of a decision log as input, (2) the generation of an unpruned decision tree, (3) the generation of DMN, and (4) the normalization of the decision table. Future research can focus on supporting practitioners in modeling decisions, ensuring their decision-making is compliant, and suggesting improvements to the modeled decisions. Another future research direction is to explore the ability to process unstructured data as input for the discovery of decisions.
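To make the log-to-table idea concrete, here is a minimal sketch (not the DM45 algorithm itself) of discovering distinct rules from a hypothetical decision log and then normalizing the resulting decision table by marking an input as irrelevant ("-") wherever the outcome does not depend on it; all attribute names and values are invented for illustration.

```python
from collections import OrderedDict

# Hypothetical decision log: each entry records the inputs available at
# decision time and the outcome that was chosen (all names are invented).
decision_log = [
    {"amount": "low",  "customer": "existing", "decision": "approve"},
    {"amount": "low",  "customer": "new",      "decision": "approve"},
    {"amount": "high", "customer": "existing", "decision": "approve"},
    {"amount": "high", "customer": "new",      "decision": "review"},
    {"amount": "low",  "customer": "existing", "decision": "approve"},  # repeated case
]

def discover_rules(log, inputs, output):
    """Collect the distinct (conditions -> outcome) rules seen in the log."""
    rules = OrderedDict()
    for case in log:
        rules.setdefault(tuple(case[i] for i in inputs), case[output])
    return rules

def normalize(rules, domains):
    """Where one input's entire domain maps to the same outcome (given the
    other inputs), replace it with '-' (irrelevant), shrinking the table."""
    merged = dict(rules)
    for pos, domain in enumerate(domains):
        groups = {}
        for key, outcome in merged.items():
            rest = key[:pos] + key[pos + 1:]
            groups.setdefault((rest, outcome), set()).add(key[pos])
        for (rest, outcome), seen in groups.items():
            if seen >= set(domain):  # outcome independent of this input
                for v in seen:
                    merged.pop(rest[:pos] + (v,) + rest[pos:], None)
                merged[rest[:pos] + ("-",) + rest[pos:]] = outcome
    return merged

rules = discover_rules(decision_log, ["amount", "customer"], "decision")
table = normalize(rules, [["low", "high"], ["existing", "new"]])
for conditions, outcome in table.items():
    print(conditions, "->", outcome)
```

In this toy log the rule for existing customers does not depend on the amount, so normalization collapses two rows into one, which is the kind of simplification a DMN decision table benefits from.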
Despite the numerous business benefits of data science, the number of data science models in production is limited. Model deployment presents many challenges, and many organisations have little deployment knowledge. This research studied five model deployments in a Dutch government organisation. The study revealed that, as a result of model deployment, a data science subprocess is added to the target business process, the model itself can be adapted, model maintenance is incorporated into the model development process, and a feedback loop is established between the target business process and the model development process. These deployment effects and the related challenges differ between strategic and operational target business processes. Based on these findings, guidelines are formulated that can form a basis for future principles on how to successfully deploy data science models. Organisations can use these guidelines as suggestions for solving their own model deployment challenges.
Exploratory analyses are an important first step in psychological research, particularly in problem-based research, where variables from multiple theoretical perspectives that have not been studied together before are often included. Notably, exploratory analyses aim to give first insights into how the items and variables included in a study relate to each other. Typically, they involve computing bivariate correlations between items and variables and presenting these in a table. While this is suitable for relatively small datasets, such tables can easily become overwhelming when datasets contain a broad set of variables from multiple theories. We propose the Gaussian graphical model as a novel exploratory analysis tool and present a systematic roadmap for applying this model to explore relationships between items and variables in environmental psychology research. We demonstrate the use and value of the Gaussian graphical model by studying relationships among a broad set of items and variables that are expected to explain the effectiveness of community energy initiatives in promoting sustainable energy behaviors.
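For readers unfamiliar with the technique, a Gaussian graphical model draws an edge between two variables only when their partial correlation (dependence after controlling for all other variables) is nonzero. The following minimal sketch, which is not the authors' roadmap, computes partial correlations from the inverse covariance (precision) matrix with plain NumPy; the three-variable chain structure is an invented example.

```python
import numpy as np

# Toy covariance with a chain structure X1 - X2 - X3: X1 and X3 are
# correlated (r**2 = 0.25) only through X2, so their partial correlation
# given X2 is zero and the graph has no edge between them.
r = 0.5
sigma = np.array([[1.0, r,   r**2],
                  [r,   1.0, r],
                  [r**2, r,  1.0]])

precision = np.linalg.inv(sigma)

# Partial correlation: pcor_ij = -P_ij / sqrt(P_ii * P_jj)
d = np.sqrt(np.diag(precision))
pcor = -precision / np.outer(d, d)
np.fill_diagonal(pcor, 1.0)

# Nonzero off-diagonal entries are the edges of the graphical model.
print(np.round(pcor, 3))
```

In practice the covariance is estimated from data and a regularized estimator (such as the graphical lasso) is used to decide which partial correlations to set to zero; the principle of reading edges off the precision matrix is the same.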
Terms like ‘big data’, ‘data science’, and ‘data visualisation’ have become buzzwords in recent years and are increasingly intertwined with journalism. Data visualisation may further blur the lines between science communication and graphic design. Our study is situated in these overlaps to compare the design of data visualisations in science news stories across four online news media platforms in South Africa and the United States. Our study contributes to an understanding of how well-considered data visualisations are tools for effective storytelling, and offers practical recommendations for using data visualisation in science communication efforts.
Although governments are investing heavily in big data analytics, reports show mixed results in terms of performance. While big data analytics capability has provided a valuable lens in business and seems useful for the public sector, little is known about its relationship with governmental performance. This study aims to explain how big data analytics capability leads to governmental performance. Using a survey research methodology, an integrated conceptual model is proposed that highlights a comprehensive set of big data analytics resources influencing governmental performance. The conceptual model was developed from prior literature. Using a PLS-SEM approach, the results strongly support the posited hypotheses: big data analytics capability has a strong impact on governmental efficiency, effectiveness, and fairness. These findings confirm the imperative role of big data analytics capability in governmental performance, which earlier studies found in the private sector. This study also validated measures of governmental performance.
As every new generation of civil aircraft creates more on-wing data and fleets gradually become more connected with the ground, an increasing number of opportunities can be identified for more effective Maintenance, Repair and Overhaul (MRO) operations. Data are becoming a valuable asset for aircraft operators: sensors measure and record thousands of parameters at increasing sampling rates. However, data do not serve any purpose per se; it is the analysis that unleashes their value. Data analytics methods can be simple, making use of visualizations, or more complex, using sophisticated statistics and Artificial Intelligence algorithms. Every problem should be approached with the most suitable and least complex method. In MRO operations, two major categories of on-wing data analytics problems can be identified. The first requires the identification of patterns, which enables the classification and optimization of different maintenance and overhaul processes. The second requires the identification of rare events, such as the unexpected failure of parts; this cluster of problems relies on the detection of meaningful outliers in large data sets. Different Machine Learning methods can be suggested here, such as Isolation Forest and Logistic Regression. In general, the use of data analytics for maintenance or failure prediction is a scientific field with great potential. Due to its complex nature, the opportunities for aviation data analytics in MRO operations are numerous. As MRO services focus increasingly on long-term contracts, maintenance organizations with the right forecasting methods will have an advantage. Data accessibility and data quality are two key factors. At the same time, numerous technical developments related to data transfer and data processing are promising for the future.
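As a sketch of the second problem category, the snippet below uses scikit-learn's Isolation Forest to flag rare flights in a small synthetic data set. The per-flight features, values, and contamination level are invented for illustration; no claim is made about real MRO data.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic per-flight features (invented): e.g. an engine temperature
# margin and a vibration level. Most flights look normal; two do not.
normal_flights = rng.normal(loc=[60.0, 1.0], scale=[5.0, 0.2], size=(95, 2))
degraded_flights = np.array([[20.0, 3.5], [15.0, 4.0]])  # clearly anomalous
X = np.vstack([normal_flights, degraded_flights])

# Isolation Forest isolates rare points in few random splits; fit_predict
# returns -1 for detected outliers and 1 for inliers.
detector = IsolationForest(n_estimators=200, contamination=0.05, random_state=0)
labels = detector.fit_predict(X)

outliers = np.where(labels == -1)[0]
print("flagged flights:", outliers)
```

The two degraded flights (the last two rows) end up among the flagged indices; tuning the contamination parameter trades false alarms against missed failures, which is exactly the operational trade-off in unexpected-failure detection.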
Conducting research as part of a PhD study offers students a unique opportunity to explore new methods and methodologies. Although we each based our PhD studies on a more traditional participatory action research (PAR) methodology, we also took the opportunity to experiment with a new data analysis method. Working from a critical social science paradigm (Fay, 1987) that translates into critical and collaborative research practice with an emancipatory intent, our scope of freedom as to how to process data, perform the analyses, and synthesise and report the results became restricted. We felt that if we were to be genuine in involving practitioners in data analysis, as co-researchers, we needed to adopt approaches that allowed the expression of all ways of knowing. Using the creative arts proved to be an innovative way of working and learning, facilitating the complex interpretation of narrative data and the identification of patterns, themes and connections. As in all qualitative research, in order to enhance process and outcome rigour, the (learning) strategies and methods used by researchers should be congruent with the principles characteristic of the chosen methodology. In this chapter, we want to offer you, the reader, a deeper insight into the key principles underlying this method for data analysis, before describing how we "danced" with them in each of our studies. Building on the original work of Boomer and McCormack (2010), who used the key principles of practice development, namely participation, inclusion and collaboration, we developed a "critical and creative data analysis framework". This framework rests on three main philosophical principles: hermeneutics, criticality, and creativity. Applying these principles to data analysis, we have learned that multiple perspectives usually show more similarities than differences, which we express visually and poetically in Figure 22.1.
The interface between two perspectives is not a juxtaposition but a fluid transition, where the sky meets the sea and the sea meets the sand. Each is separate and yet part of the whole, bigger picture.