From the article: "The term Internet of Things (IoT) is used for situations where one or more devices are connected to a network or possibly the Internet. Most studies focus on the possibilities that arise when a device is capable of sharing its data with other devices or humans. In this study, the focus is on the device itself and on the possibilities an Internet connection offers to the device and its owner or user. The data the device needs in order to participate in the IoT in a smart way is also part of this study. Agent technology is the enabling technology for the ideas introduced here. A proof of concept is given, in which some of the concepts proposed in the paper are put into practice."
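As a rough illustration of the device-centric view described above (not taken from the paper; the endpoint, class, and parameter names are hypothetical), the sketch below shows a software agent that lets a device use its own Internet connection to request maintenance on behalf of its owner:

```python
# Hypothetical sketch of a device-side agent; MAINTENANCE_URL and the
# service-interval logic are illustrative assumptions, not from the paper.
import json
import urllib.request

MAINTENANCE_URL = "https://example.com/maintenance"  # hypothetical endpoint


class DeviceAgent:
    """A software agent acting on behalf of a physical device."""

    def __init__(self, device_id: str, hours_used: float, service_interval: float):
        self.device_id = device_id
        self.hours_used = hours_used
        self.service_interval = service_interval

    def needs_service(self) -> bool:
        # The agent reasons about the device's own state.
        return self.hours_used >= self.service_interval

    def request_service(self) -> None:
        # The agent acts autonomously: it contacts a service provider itself
        # instead of waiting for the owner to notice a problem.
        payload = json.dumps({"device": self.device_id,
                              "hours": self.hours_used}).encode()
        req = urllib.request.Request(MAINTENANCE_URL, data=payload,
                                     headers={"Content-Type": "application/json"})
        try:
            urllib.request.urlopen(req, timeout=5)
        except OSError as exc:  # no real endpoint exists in this sketch
            print(f"service request for {self.device_id} failed: {exc}")


agent = DeviceAgent("pump-42", hours_used=1020, service_interval=1000)
if agent.needs_service():
    agent.request_service()
```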
The research described in this paper provides insights into the tools and methods that professional information workers use to keep and manage their personal information. A literature study was carried out on 23 scholarly papers and articles retrieved from the ACM Digital Library and Library and Information Science Abstracts (LISA). The research questions were:
- How do information workers keep and manage their information sources?
- What aims do they have when building personal information collections?
- What problems do they experience with the use and management of their personal collections?
The main conclusion from the literature is that professional information workers use different tools and approaches for personal information management, depending on their personal style, the types of information in their collections, and the devices they use for retrieval. The main problem they experience is information fragmentation across different collections and different devices. These findings can provide input for improving information literacy curricula in Higher Education. It has been remarked that scholarly research and literature on Personal Information Management pay little attention to the keeping and management of (bibliographic) data from external documentation. How people process the information from those sources, and how this stimulates their personal learning, is completely overlooked. [The original publication is available at www.elpub.net]
The focus of the research is 'Automated Analysis of Human Performance Data'. The three interconnected main components are (i) Human Performance, (ii) Monitoring Human Performance, and (iii) Automated Data Analysis. Human Performance is both the process and the result of a person interacting with a context to engage in tasks, whereas the performance range is determined by the interaction between the person and the context. Cheap and reliable wearable sensors allow for gathering large amounts of data, which is very useful for understanding, and possibly predicting, the performance of the user. Given the amount of data generated by such sensors, manual analysis becomes infeasible; tools should be devised for performing automated analysis that looks for patterns, features, and anomalies. Such tools can help transform wearable sensors into reliable high-resolution devices, help experts analyse wearable sensor data in the context of human performance, and support diagnosis and intervention. Shyr and Spisic describe Automated Data Analysis as follows: automated data analysis provides a systematic process of inspecting, cleaning, transforming, and modelling data with the goal of discovering useful information, suggesting conclusions, and supporting decision making for further analysis. Their philosophy is to do the tedious part of the work automatically and allow experts to focus on performing their research and applying their domain knowledge. However, automated data analysis means that the system has to teach itself to interpret interim results and to iterate. Knuth stated: "Science is knowledge which we understand so well that we can teach it to a computer; and if we don't fully understand something, it is an art to deal with it." [Knuth, 1974] He also noted: "Since the notion of an algorithm or a computer program provides us with an extremely useful test for the depth of our knowledge about any given subject, the process of going from an art to a science means that we learn how to automate something." The knowledge on Human Performance and its Monitoring is to be 'taught' to the system. To be able to construct automated analysis systems, an overview of the essential processes and components of these systems is needed.
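A minimal sketch of the inspect-clean-transform-model cycle that Shyr and Spisic describe, applied to a synthetic wearable heart-rate signal; the rolling-mean detrending and the z-score threshold are illustrative choices, not the method used in this research:

```python
# Illustrative automated-analysis pipeline on synthetic sensor data.
import numpy as np

rng = np.random.default_rng(0)

# Inspect: a synthetic 1 Hz heart-rate signal with a dropout and a spike.
hr = 70 + 3 * rng.standard_normal(600)
hr[100:105] = np.nan   # sensor dropout
hr[400] = 180.0        # anomalous spike

# Clean: interpolate the missing samples.
idx = np.arange(hr.size)
mask = np.isnan(hr)
hr[mask] = np.interp(idx[mask], idx[~mask], hr[~mask])

# Transform: subtract a rolling mean to isolate short-term deviations.
window = 30
trend = np.convolve(hr, np.ones(window) / window, mode="valid")
residual = hr[window - 1:] - trend  # align samples with the valid trend

# Model: flag samples whose residual z-score exceeds a threshold.
z = (residual - residual.mean()) / residual.std()
anomalies = np.flatnonzero(np.abs(z) > 4) + window - 1
print("anomalous sample indices:", anomalies)  # expected: the spike at 400
```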
Today, embedded devices such as banking and transportation cards, car keys, and mobile phones use cryptographic techniques to protect personal information and communication. Such devices are increasingly becoming the targets of attacks that try to capture the underlying secret information, e.g., cryptographic keys. Attacks targeting not the cryptographic algorithm but its implementation are especially devastating; the best-known examples are so-called side-channel and fault injection attacks. Such attacks, often jointly referred to as physical (implementation) attacks, are difficult to preclude, and if the key (or other data) is recovered, the device is rendered useless. To mitigate such attacks, security evaluators use the same techniques as attackers and look for possible weaknesses in order to "fix" them before deployment. Unfortunately, the attackers' resourcefulness on the one hand, and the usually short amount of time available to security evaluators (plus the human error factor) on the other, make this an unfair race. Consequently, researchers are looking into possible ways of making security evaluations more reliable and faster. To that end, machine learning techniques have shown themselves to be a viable candidate, although the challenge is far from solved. Our project aims at the development of automatic frameworks able to assess various potential side-channel and fault injection threats coming from diverse sources. Such systems will give security evaluators, and above all companies producing chips for security applications, the option to find potential weaknesses early and to assess the trade-off between making the product more secure and making it more implementation-friendly. To this end, we plan to use machine learning techniques coupled with novel techniques not explored before for side-channel and fault analysis. In addition, we will design new techniques specially tailored to improve the performance of this evaluation process. Our research fills the gap between what is known in academia on physical attacks and what is needed in industry to prevent such attacks. In the end, once our frameworks become operational, they could also be a useful tool for mitigating other types of threats, like ransomware or rootkits.
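To make the role of machine learning in such evaluations concrete, the sketch below simulates a profiled side-channel analysis: a classifier is trained on traces from a device with a known key and then used to recover an unknown key byte. The Hamming-weight leakage model, the simulated traces, and the random-forest classifier are illustrative assumptions, not the frameworks this project develops:

```python
# Illustrative profiled side-channel attack on simulated power traces.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

def hamming_weight(x):
    # Number of set bits per byte value.
    return np.unpackbits(x.astype(np.uint8)[:, None], axis=1).sum(axis=1)

def simulate_traces(n, key_byte, noise=1.0):
    """Traces whose samples leak HW(plaintext XOR key_byte) plus noise."""
    plaintexts = rng.integers(0, 256, size=n, dtype=np.uint8)
    leak = hamming_weight(plaintexts ^ key_byte).astype(float)
    traces = leak[:, None] + noise * rng.standard_normal((n, 5))
    return traces, plaintexts, leak.astype(int)

# Profiling phase: learn the leakage on a device with a known key.
X_train, _, y_train = simulate_traces(5000, key_byte=0x3C)
clf = RandomForestClassifier(n_estimators=50).fit(X_train, y_train)

# Attack phase: score every key guess against fresh traces.
X_att, pts, _ = simulate_traces(200, key_byte=0xA7)
hw_pred = clf.predict(X_att)
scores = [np.sum(hw_pred == hamming_weight(pts ^ np.uint8(g)))
          for g in range(256)]
print("recovered key byte:", hex(int(np.argmax(scores))))  # expected: 0xa7
```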
The integration of renewable energy resources, controllable devices, and energy storage into electricity distribution grids requires Decentralized Energy Management to ensure a stable distribution process. This demands the full integration of information and communication technology into the control of distribution grids. Supervisory Control and Data Acquisition (SCADA) is used to communicate measurements and commands between individual components and the control server. In the future, this control is especially needed at the medium-voltage level and probably also at the low-voltage level. This leads to increased connectivity and thereby makes the system more vulnerable to cyber-attacks. According to the research agenda NCSRA III, the energy domain is becoming a prime target for cyber-attacks, e.g., abusing control protocol vulnerabilities. Detection of such attacks in SCADA networks is challenging when relying only on existing network Intrusion Detection Systems (IDSs). Although these systems were designed specifically for SCADA, they do not necessarily detect malicious control commands sent in a legitimate format. However, analyzing each command in the context of the physical system has the potential to reveal certain inconsistencies. We propose to use dedicated intrusion detection mechanisms, which are fundamentally different from existing techniques used in the Internet. Up to now, distribution grids are monitored and controlled centrally: measurements are taken at field stations and sent to the control room, which then issues commands back to actuators. In future smart grids, communication with and remote control of field stations is required. Attackers who gain access to the corresponding communication links to substations can intercept and even replace commands, which would not be detected by central security mechanisms. We argue that centralized SCADA systems should be enhanced with a distributed intrusion detection approach to meet the new security challenges. Recently, as a first step, a process-aware monitoring approach has been proposed as an additional layer that can be applied directly at Remote Terminal Units (RTUs). However, this allows purely local consistency checks. Instead, we propose a distributed and integrated approach to process-aware monitoring, which includes knowledge about the grid topology and measurements from neighboring RTUs to detect malicious incoming commands. The proposed approach requires a near real-time model of the relevant physical process, direct and secure communication between adjacent RTUs, and synchronized sensor measurements in trustable real-time, labeled with accurate global time-stamps. We investigate to what extent the grid topology can be integrated into the IDS while maintaining near real-time performance. Based on topology information and efficient solving of the power flow equations, we aim to detect, e.g., inconsistent voltage drops or the occurrence of over-/under-voltage and -current. In this way, centrally requested switching commands and transformer tap-change commands can be checked for consistency and safety based on the current state of the physical system. The developed concepts are not only relevant for increasing the security of distribution grids but are also crucial for dealing with future developments such as the safe integration of microgrids into distribution networks or the operation of decentralized heat or biogas networks.
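A minimal sketch of the kind of process-aware command check proposed here, under a strongly simplified feeder model: the RTU predicts the effect of a centrally issued transformer tap-change command on its own and its neighbours' synchronized voltage measurements and rejects the command if voltage limits would be violated. The limits, the tap step size, and the uniform voltage-shift approximation are illustrative assumptions, not the project's actual grid model:

```python
# Illustrative consistency check for a tap-change command at an RTU.
from dataclasses import dataclass

V_MIN, V_MAX = 0.95, 1.05  # per-unit voltage limits (assumed)
TAP_STEP = 0.0125          # p.u. voltage change per tap position (assumed)

@dataclass
class Measurements:
    """Time-synchronized state seen by this RTU and adjacent RTUs."""
    local_voltage: float
    neighbor_voltages: list[float]

def tap_change_is_safe(cmd_steps: int, m: Measurements) -> bool:
    """Reject a tap-change command if the predicted post-command
    voltage would violate limits at this RTU or any neighbour."""
    dv = cmd_steps * TAP_STEP  # simplification: uniform shift on the feeder
    predicted = [m.local_voltage + dv] + [v + dv for v in m.neighbor_voltages]
    return all(V_MIN <= v <= V_MAX for v in predicted)

m = Measurements(local_voltage=1.04, neighbor_voltages=[1.03, 1.045])
print(tap_change_is_safe(+2, m))  # False: a neighbour would exceed V_MAX
print(tap_change_is_safe(-1, m))  # True: all voltages stay within limits
```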