Automated Analysis of Human Performance Data could help to understand, and possibly predict, human performance. To inform future research and enable Automated Analysis of Human Performance Data, a systematic mapping study (scoping study) of the state-of-the-art knowledge is performed on three interconnected components: (i) Human Performance, (ii) Monitoring Human Performance and (iii) Automated Data Analysis. Applying the systematic method of Kitchenham and Charters for performing the systematic mapping study resulted in a comprehensive search for studies and a categorisation of the studies using a qualitative method. This systematic mapping review extends the philosophy of Shyr and Spisic, and of Knuth, and represents the state-of-the-art knowledge on Human Performance, Monitoring Human Performance and Automated Data Analysis.
DOCUMENT
The rising rate of preprints and publications, combined with persistent inadequate reporting practices and problems with study design and execution, has strained the traditional peer review system. Automated screening tools could potentially enhance peer review by helping authors, journal editors, and reviewers to identify beneficial practices and common problems in preprints or submitted manuscripts. Tools can screen many papers quickly, and may be particularly helpful in assessing compliance with journal policies and with straightforward items in reporting guidelines. However, existing tools cannot understand or interpret the paper in the context of the scientific literature. Tools cannot yet determine whether the methods used are suitable to answer the research question, or whether the data support the authors’ conclusions. Editors and peer reviewers are essential for assessing journal fit and the overall quality of a paper, including the experimental design, the soundness of the study’s conclusions, potential impact and innovation. Automated screening tools cannot replace peer review, but may aid authors, reviewers, and editors in improving scientific papers. Strategies for responsible use of automated tools in peer review may include setting performance criteria for tools, transparently reporting tool performance and use, and training users to interpret reports.
DOCUMENT
To cope with changing demands from society, higher education institutes are developing adaptive curricula in which a suitable integration of workplace learning is an important factor. Automated feedback can be used as part of formative assessment strategies to enhance student learning in the workplace. However, due to the complex and diverse nature of workplace learning processes, it is difficult to align automated feedback with the needs of the individual student. The main research question we aim to answer in this design-based study is: ‘How can we support higher education students’ reflective learning in the workplace by providing automated feedback while learning in the workplace?’. Iterative development yielded 1) a framework for automated feedback in workplace learning, 2) design principles and guidelines and 3) an application prototype implemented according to this framework and design knowledge. In the near future, we plan to evaluate and improve these tentative products in pilot studies. https://link.springer.com/chapter/10.1007/978-3-030-25264-9_6
DOCUMENT
Developing a framework that integrates Advanced Language Models into the qualitative research process. Qualitative research, vital for understanding complex phenomena, is often limited by labour-intensive data collection, transcription, and analysis processes. This hinders scalability, accessibility, and efficiency in both academic and industry contexts. As a result, insights are often delayed or incomplete, impacting decision-making, policy development, and innovation. The lack of tools to enhance accuracy and reduce human error exacerbates these challenges, particularly for projects requiring large datasets or quick iterations. Addressing these inefficiencies through AI-driven solutions like AIDA can empower researchers, enhance outcomes, and make qualitative research more inclusive, impactful, and efficient. The AIDA project enhances qualitative research by integrating AI technologies to streamline transcription, coding, and analysis processes. This innovation enables researchers to analyse larger datasets with greater efficiency and accuracy, providing faster and more comprehensive insights. By reducing manual effort and human error, AIDA empowers organisations to make informed decisions and implement evidence-based policies more effectively. Its scalability supports diverse societal and industry applications, from healthcare to market research, fostering innovation and addressing complex challenges. Ultimately, AIDA contributes to improving research quality, accessibility, and societal relevance, driving advancements across multiple sectors.
DOCUMENT
Various companies in diagnostic testing struggle with the same “valley of death” challenge. In order to further develop their sensing applications, they rely on the technological readiness of easy and reproducible read-out systems. Photonic chips can be very sensitive sensors and can be made application-specific when coated with a properly chosen bio-functionalized layer. Here the challenge lies in the optical coupling of the active components (light source and detector) to the (disposable) photonic sensor chip. For the technology to be commercially viable, the price of the disposable photonic sensor chip should be as low as possible. The coupling of light from the source to the photonic sensor chip and back to the detectors requires a positioning accuracy of less than 1 micrometer, which is a tremendous challenge. In this research proposal, we want to investigate which of the six degrees of freedom (three translational and three rotational) are the most crucial when aligning photonic sensor chips with the external active components. Knowing these degrees of freedom and their respective ranges, we can develop and test an automated alignment tool which can realize photonic sensor chip alignment reproducibly and fully autonomously. The consortium, with expertise and contributions in the value chain of photonics interfacing, system and mechanical engineering, will investigate a two-step solution. This solution comprises a passive pre-alignment step (a mechanical stop determines the position), followed by an active alignment step (an algorithm moves the source to the optimal position with respect to the chip). The results will be integrated into a demonstrator that performs an automated procedure that aligns a passive photonic chip with a terminal that contains the active components.
The demonstrator is successful if adequate optical coupling of the passive photonic chip with the external active components is realized fully automatically, without the need for operator intervention.
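The active alignment step described above (an algorithm moving the source to the position that maximises coupling) could, purely as an illustration, be sketched as a coordinate hill climb over a measured power signal. Everything here is an assumption for the sketch: `measure_power` stands in for a real detector reading, the optimum coordinates are invented, and the step sizes are arbitrary; the project's actual algorithm and hardware interface may differ entirely.

```python
def measure_power(position):
    """Hypothetical stand-in for a detector reading: coupled optical
    power peaks when the source sits at an (unknown) optimum.
    The optimum below is invented for illustration only."""
    optimum = (0.4, -0.2, 0.1)  # micrometres, illustrative
    dist2 = sum((p - o) ** 2 for p, o in zip(position, optimum))
    return 1.0 / (1.0 + dist2)

def active_align(start, step=0.5, min_step=0.01):
    """Coordinate hill climb over the translational axes: try +/- step
    on each axis, keep any move that raises the measured power, and
    halve the step once no axis improves, down to min_step (um)."""
    pos = list(start)
    best = measure_power(pos)
    while step >= min_step:
        improved = False
        for axis in range(len(pos)):
            for delta in (step, -step):
                trial = pos[:]
                trial[axis] += delta
                power = measure_power(trial)
                if power > best:
                    pos, best = trial, power
                    improved = True
        if not improved:
            step /= 2
    return pos, best

pos, power = active_align((5.0, 5.0, 5.0))
```

In practice such a search would only run after the passive pre-alignment step has brought the chip within the capture range of the feedback signal, which is what makes the two-step approach attractive.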
DOCUMENT

The maximum capacity of the road infrastructure is being reached due to the number of vehicles that are being introduced on Dutch roads each day. One of the plausible solutions to tackle congestion could be efficient and effective use of road infrastructure using modern technologies such as cooperative mobility. Cooperative mobility relies heavily on big data that is potentially generated by millions of vehicles travelling on the road. But how can this data be generated? Modern vehicles already contain a host of sensors that are required for their operation. This data is typically circulated within an automobile via the CAN bus and can, in principle, be shared with the outside world, taking the privacy aspects of data sharing into account. The main problem, however, is the difficulty in interpreting this data. This is mainly because the configuration of this data varies between manufacturers and vehicle models and has not been standardized by the manufacturers. Signals from the CAN bus could be manually reverse engineered, but this process is extremely labour-intensive and time-consuming. In this project we investigate whether an intelligent tool or specific test procedures could be developed to extract CAN messages and their composition efficiently, irrespective of vehicle brand and type. This would lay the foundations required to generate big data-sets from in-vehicle data efficiently.
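A first step such a tool might take when reverse engineering unknown CAN traffic is to group frames by arbitration ID and flag which payload byte positions actually vary, narrowing down where candidate signals live. The sketch below is a minimal illustration of that idea only; the `frames` input format, the IDs and payloads are all invented for the example and do not reflect any specific vehicle or the project's actual method.

```python
def changing_bytes(frames):
    """Group raw CAN frames by arbitration ID and report which payload
    byte positions ever change -- a coarse first filter for candidate
    signal locations. `frames` is a list of (can_id, payload) tuples,
    e.g. parsed from a bus capture; the format is an assumption."""
    seen = {}  # can_id -> list of observed payloads
    for can_id, payload in frames:
        seen.setdefault(can_id, []).append(payload)
    report = {}
    for can_id, payloads in seen.items():
        first = payloads[0]
        variable = set()
        for payload in payloads[1:]:
            for i, (a, b) in enumerate(zip(first, payload)):
                if a != b:
                    variable.add(i)
        report[can_id] = sorted(variable)
    return report

# Invented sample capture: byte 1 of ID 0x1A0 varies, ID 0x2B4 is static.
frames = [
    (0x1A0, bytes([0x00, 0x10, 0xFF])),
    (0x1A0, bytes([0x00, 0x11, 0xFF])),
    (0x1A0, bytes([0x00, 0x12, 0xFF])),
    (0x2B4, bytes([0x05, 0x05])),
    (0x2B4, bytes([0x05, 0x05])),
]
print(changing_bytes(frames))
```

Correlating the varying byte positions with an externally known quantity (e.g. speed from GPS during a test drive) would be the natural next step, which hints at why specific test procedures are mentioned alongside an intelligent tool.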