The development of the World Wide Web, the emergence of social media and Big Data have led to a rising amount of data. Information and Communication Technologies (ICTs) affect the environment in various ways. Their energy consumption is growing exponentially, with and without the use of 'green' energy. Increasing environmental awareness has led to discussions on sustainable development. The data deluge makes it necessary to pay attention not only to the hardware and software dimensions of ICTs but also to the 'value' of the data stored. In this paper, we study the possibility of methodically reducing the amount of stored data and records in organizations based on the 'value' of information, using the Green Archiving Model we have developed. Reducing the amount of data and records helps organizations fight the data deluge and realize the objectives of both Digital Archiving and Green IT. At the same time, methodically deleting data and records should reduce the electricity consumed for data storage and, as a consequence, the organizational cost of electricity use. Our research showed that the model can be used to reduce [1] the amount of data (by 45 percent, using Archival Retention Levels and Retention Schedules) and [2] the electricity consumption for data storage (resulting in a cost reduction of 35 percent). Our research indicates that the Green Archiving Model is a viable model for reducing the amount of stored data and records and for curbing electricity use for storage in organizations. This paper is the result of the first stage of a research project aimed at developing low-power ICTs that will automatically appraise, select, preserve or permanently delete data based on their 'value'. Such ICTs will automatically reduce the required storage capacity and the electricity consumed for data storage. At the same time, data disposal will reduce the overload caused by storing the same data in different formats, lower costs and reduce the potential for liability.
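To make the appraisal step concrete, the sketch below shows one way a retention-schedule-driven appraisal routine might look. It is a minimal, hypothetical illustration: the record categories, retention periods and function names are invented for this example and are not part of the Green Archiving Model itself.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical retention schedule: record category -> retention period in days.
RETENTION_SCHEDULE = {
    "transaction_log": 365,            # keep one year
    "project_documentation": 7 * 365,  # keep seven years
    "marketing_draft": 90,             # keep three months
}

@dataclass
class Record:
    record_id: str
    category: str
    created: datetime

def appraise(record: Record, now: datetime) -> str:
    """Return 'delete' when the retention period has expired, else 'preserve'."""
    retention_days = RETENTION_SCHEDULE.get(record.category)
    if retention_days is None:
        return "preserve"  # unknown categories are kept for manual appraisal
    expired = now - record.created > timedelta(days=retention_days)
    return "delete" if expired else "preserve"

if __name__ == "__main__":
    now = datetime(2015, 1, 1)
    records = [
        Record("r1", "marketing_draft", datetime(2014, 1, 1)),
        Record("r2", "project_documentation", datetime(2014, 1, 1)),
    ]
    for r in records:
        print(r.record_id, appraise(r, now))
```

In a production setting the 'delete' decision would of course trigger a controlled disposal workflow rather than an immediate removal; the point of the sketch is only that retention schedules can be evaluated automatically per record.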
Insight is given into the data sharing environment that AUAS has provided for charging infrastructure, and the effects of privacy legislation are evaluated.
Data mining seems to be a promising way to tackle the problem of unpredictability in MRO organizations. The Amsterdam University of Applied Sciences therefore cooperated with the aviation industry in a two-year applied research project exploring the possibilities of data mining in this area. Researchers studied more than 25 cases at eight different MRO enterprises, applying the CRISP-DM methodology as a structural guideline throughout the project. They explored, prepared and combined MRO data, flight data and external data, and used statistical and machine learning methods to visualize, analyse and predict maintenance. They also used the individual case studies to make predictions about the duration and costs of planned maintenance tasks, turnaround time and the useful life of parts. Challenges presented by the case studies included time-consuming data preparation, access restrictions to external data sources and the still-limited data science skills in companies. Recommendations were made on ways to implement data mining in MRO and to overcome the related challenges. Overall, the research project has delivered promising proofs of concept and pilot implementations.
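As a hypothetical illustration of one type of analysis mentioned above, the sketch below trains a small regression model to predict the duration of a planned maintenance task from historical work orders. The column names, data values and model choice are assumptions for this example and do not reproduce any specific case study from the project.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Hypothetical historical work orders: task type, aircraft type, number of
# findings during inspection, and the realized duration in man-hours.
work_orders = pd.DataFrame({
    "task_type": ["A-check", "A-check", "C-check", "C-check", "A-check", "C-check"],
    "aircraft_type": ["B737", "A320", "B737", "A320", "B737", "B737"],
    "findings": [2, 0, 14, 9, 1, 11],
    "duration_hours": [55, 40, 410, 355, 48, 395],
})

# One-hot encode the categorical columns, keep the numeric one.
X = pd.get_dummies(work_orders[["task_type", "aircraft_type", "findings"]])
y = work_orders["duration_hours"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
print("MAE (hours):", mean_absolute_error(y_test, model.predict(X_test)))
```

In practice the feature set would be far richer (flight data, external data, findings history) and the data preparation step, as the project found, is where most of the effort goes.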
The focus of the research is 'Automated Analysis of Human Performance Data'. The three interconnected main components are (i) Human Performance, (ii) Monitoring Human Performance and (iii) Automated Data Analysis. Human Performance is both the process and result of the person interacting with the context to engage in tasks, whereas the performance range is determined by the interaction between the person and the context. Cheap and reliable wearable sensors allow for gathering large amounts of data, which is very useful for understanding, and possibly predicting, the performance of the user. Given the amount of data generated by such sensors, manual analysis becomes infeasible; tools should be devised for performing automated analysis looking for patterns, features and anomalies. Such tools can help transform wearable sensors into reliable high-resolution devices and help experts analyse wearable sensor data in the context of human performance, and use it for diagnosis and intervention purposes. Shyr and Spisic describe automated data analysis as a systematic process of inspecting, cleaning, transforming and modelling data with the goal of discovering useful information, suggesting conclusions and supporting decision making for further analysis. Their philosophy is to do the tedious part of the work automatically and allow experts to focus on performing their research and applying their domain knowledge. However, automated data analysis means that the system has to teach itself to interpret interim results and do iterations. Knuth stated: 'Science is knowledge which we understand so well that we can teach it to a computer; and if we don't fully understand something, it is an art to deal with it' [Knuth, 1974]. The knowledge on Human Performance and its Monitoring is to be 'taught' to the system. To be able to construct automated analysis systems, an overview of the essential processes and components of these systems is needed. As Knuth put it: 'Since the notion of an algorithm or a computer program provides us with an extremely useful test for the depth of our knowledge about any given subject, the process of going from an art to a science means that we learn how to automate something.'
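As a minimal sketch of what such automated analysis might look like in practice (not the tooling developed in this research), the example below scans a hypothetical wearable heart-rate signal for anomalies using a rolling z-score. The signal, sampling rate and threshold are invented assumptions for illustration.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical heart-rate signal sampled once per second, with two injected spikes.
heart_rate = pd.Series(70 + rng.normal(0, 2, 600))
heart_rate.iloc[[150, 400]] += 40

# Rolling z-score: flag samples that deviate strongly from the local baseline.
window = 60
baseline = heart_rate.rolling(window, center=True).mean()
spread = heart_rate.rolling(window, center=True).std()
z_score = (heart_rate - baseline) / spread

anomalies = heart_rate[z_score.abs() > 4]
print(f"{len(anomalies)} anomalous samples at indices: {list(anomalies.index)}")
```

A real system would go further: it would iterate over candidate features and thresholds, interpret the interim results and relate detected anomalies to the performance context, which is exactly the knowledge that has to be 'taught' to it.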
Analysing large data sets to improve the quality of education is a hot topic. Applying learning analytics can improve education. We are researching learning analytics and the skills users need to work with it. Goal: We investigate the consequences of data processing for the outcomes of learning analytics, and which skills users need to use these systems in a meaningful way. Learning analytics: Learning analytics is the measurement, collection, analysis and reporting of data about students and their environment in order to understand and improve learning and the learning environment. The use of learning analytics systems: Realising large parts of the educational vision of Hogeschool Utrecht is strongly tied to the successful execution of analyses at student level. Using learning analytics systems is not self-evident: the designers and developers of these systems must be transparent about their design choices (such as methods of data processing and how the algorithms work), while students and lecturers must have the data skills to use these systems meaningfully. Results: This research is ongoing; a summary of the results will appear here afterwards. In July 2019 the researchers published the article 'Automated Feedback for Workplace Learning in Higher Education'. Duration: 01 September 2017 - 31 December 2020. Approach: We first carried out exploratory research through a case study investigating the effects of different data cleaning choices on the outcomes of the data analysis. From September 2019 we will investigate which data skills students need to use learning analytics systems effectively.
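To illustrate the kind of effect the case study looks at, here is a small, hypothetical example in which two defensible data cleaning choices for missing grades (dropping records versus mean imputation) produce different pass rates in a learning analytics report. The data, the pass mark and the cleaning rules are invented for this sketch and do not come from the case study itself.

```python
import pandas as pd

grades = pd.DataFrame({
    "student": ["s1", "s2", "s3", "s4", "s5", "s6"],
    "grade": [8.0, None, 5.0, None, 6.5, 4.0],
})
PASS_MARK = 5.5

# Choice A: drop students with missing grades.
dropped = grades.dropna(subset=["grade"])
pass_rate_a = (dropped["grade"] >= PASS_MARK).mean()

# Choice B: impute missing grades with the mean grade.
imputed = grades.fillna({"grade": grades["grade"].mean()})
pass_rate_b = (imputed["grade"] >= PASS_MARK).mean()

print(f"pass rate when dropping missing grades: {pass_rate_a:.0%}")  # 50%
print(f"pass rate when imputing the mean grade: {pass_rate_b:.0%}")  # 67%
```

The same dashboard metric thus changes depending on a cleaning decision that is usually invisible to the student or lecturer reading it, which is why the research asks designers to be transparent about such choices.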
The analysis of data about student learning can be valuable. Learning analytics uses student data to improve the learning process. Which organisational capabilities do Dutch higher education institutions need to deploy learning analytics successfully? Goal: We investigate which organisational capabilities are needed to work with learning analytics in higher education. Learning analytics gives students, lecturers and study advisers insight into the learning process by analysing student data. In practice, educational institutions find it difficult to adopt learning analytics across the whole organisation. In this research we examine which capabilities an organisation needs to deploy learning analytics effectively. Results: This research is ongoing. So far we have published three scientific articles: 'A First Step Towards Learning Analytics: Implementing an Experimental Learning Analytics Tool', 'Where is the learning in learning analytics? A systematic literature review to identify measures of affected learning', and 'From Dirty Data to Multiple Versions of Truth: How Different Choices in Data Cleaning Lead to Different Learning Analytics Outcomes'. Duration: 01 December 2016 - 01 December 2020. Approach: The research consists of a literature review, a case study at Dutch educational institutions and a validation project. This leads to the development of a Learning Analytics Capability Model (LACM): a model that describes which organisational capabilities are needed to apply learning analytics in practice.