We developed an application that allows learners to construct qualitative representations of dynamic systems, to help them learn subject content knowledge and system thinking skills simultaneously. Within this application, we implemented a lightweight support function that automatically generates help from a norm representation as learners construct these qualitative representations. This support can be expected to improve learning. With this function, it is not necessary to define in advance the possible errors learners may make and the corresponding feedback; nor is any data from (previous) learners required. Such a lightweight support function is ideal for situations in which lessons are designed on a wide variety of topics for small groups of learners. Here, we report on the use and impact of this support function in two lessons: Star Formation and Neolithic Age. A total of 63 ninth-grade learners from secondary school participated. The study used a pretest/intervention/post-test design with two conditions (no support vs. support) for both lessons. Learners with access to the support created better representations, learned more subject content knowledge, and improved their system thinking skills. Learners used the support throughout the lessons, and did so more often than they requested help from the teacher. We also found no evidence of misuse, i.e., 'gaming the system', of the support function.
DOCUMENT
To cope with changing demands from society, higher education institutes are developing adaptive curricula in which a suitable integration of workplace learning is an important factor. Automated feedback can be used as part of formative assessment strategies to enhance student learning in the workplace. However, due to the complex and diverse nature of workplace learning processes, it is difficult to align automated feedback with the needs of the individual student. The main research question we aim to answer in this design-based study is: ‘How can we support higher education students’ reflective learning in the workplace by providing automated feedback while learning in the workplace?’. Iterative development yielded 1) a framework for automated feedback in workplace learning, 2) design principles and guidelines, and 3) an application prototype implemented according to this framework and design knowledge. In the near future, we plan to evaluate and improve these tentative products in pilot studies. https://link.springer.com/chapter/10.1007/978-3-030-25264-9_6
DOCUMENT
The rising rate of preprints and publications, combined with persistently inadequate reporting practices and problems with study design and execution, has strained the traditional peer review system. Automated screening tools could potentially enhance peer review by helping authors, journal editors, and reviewers to identify beneficial practices and common problems in preprints or submitted manuscripts. Tools can screen many papers quickly, and may be particularly helpful in assessing compliance with journal policies and with straightforward items in reporting guidelines. However, existing tools cannot understand or interpret the paper in the context of the scientific literature. Tools cannot yet determine whether the methods used are suitable to answer the research question, or whether the data support the authors’ conclusions. Editors and peer reviewers are essential for assessing journal fit and the overall quality of a paper, including the experimental design, the soundness of the study’s conclusions, potential impact, and innovation. Automated screening tools cannot replace peer review, but may aid authors, reviewers, and editors in improving scientific papers. Strategies for responsible use of automated tools in peer review may include setting performance criteria for tools, transparently reporting tool performance and use, and training users to interpret reports.
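As an illustration of the kind of 'straightforward item' such screening tools can check, the sketch below flags manuscripts that appear to lack a data availability statement or a reported sample size. The keyword patterns are invented for this example and are far cruder than the production tools the abstract refers to.

```python
import re

# Hypothetical keyword patterns for two simple reporting-guideline items;
# real screening tools use far more sophisticated checks than these.
CHECKS = {
    "data availability statement": re.compile(
        r"data (are|is) available|data availability", re.IGNORECASE
    ),
    "sample size reported": re.compile(
        r"\bn\s*=\s*\d+|\bsample size\b", re.IGNORECASE
    ),
}


def screen_manuscript(text: str) -> dict:
    """Return, for each check, whether the manuscript text appears to satisfy it."""
    return {name: bool(pattern.search(text)) for name, pattern in CHECKS.items()}


if __name__ == "__main__":
    manuscript = "We recruited participants (n = 42). Data are available on request."
    for item, passed in screen_manuscript(manuscript).items():
        print(f"{item}: {'found' if passed else 'missing'}")
```

Such pattern-based checks illustrate why these tools can flag missing elements but cannot judge whether the methods or conclusions are sound.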
DOCUMENT
Developing a framework that integrates Advanced Language Models into the qualitative research process. Qualitative research, vital for understanding complex phenomena, is often limited by labour-intensive data collection, transcription, and analysis processes. This hinders scalability, accessibility, and efficiency in both academic and industry contexts. As a result, insights are often delayed or incomplete, impacting decision-making, policy development, and innovation. The lack of tools to enhance accuracy and reduce human error exacerbates these challenges, particularly for projects requiring large datasets or quick iterations. Addressing these inefficiencies through AI-driven solutions like AIDA can empower researchers, enhance outcomes, and make qualitative research more inclusive, impactful, and efficient. The AIDA project enhances qualitative research by integrating AI technologies to streamline transcription, coding, and analysis processes. This innovation enables researchers to analyse larger datasets with greater efficiency and accuracy, providing faster and more comprehensive insights. By reducing manual effort and human error, AIDA empowers organisations to make informed decisions and implement evidence-based policies more effectively. Its scalability supports diverse societal and industry applications, from healthcare to market research, fostering innovation and addressing complex challenges. Ultimately, AIDA contributes to improving research quality, accessibility, and societal relevance, driving advancements across multiple sectors.
The focus of the research is 'Automated Analysis of Human Performance Data'. The three interconnected main components are (i) Human Performance, (ii) Monitoring Human Performance, and (iii) Automated Data Analysis. Human Performance is both the process and the result of a person interacting with a context to engage in tasks, whereas the performance range is determined by the interaction between the person and the context. Cheap and reliable wearable sensors allow for gathering large amounts of data, which is very useful for understanding, and possibly predicting, the performance of the user. Given the amount of data generated by such sensors, manual analysis becomes infeasible; tools should be devised for performing automated analysis, looking for patterns, features, and anomalies. Such tools can help transform wearable sensors into reliable high-resolution devices, and help experts analyse wearable sensor data in the context of human performance and use it for diagnosis and intervention purposes. Shyr and Spisic describe Automated Data Analysis as follows: automated data analysis provides a systematic process of inspecting, cleaning, transforming, and modelling data with the goal of discovering useful information, suggesting conclusions, and supporting decision making for further analysis. Their philosophy is to do the tedious part of the work automatically, and allow experts to focus on performing their research and applying their domain knowledge. However, automated data analysis means that the system has to teach itself to interpret interim results and do iterations. Knuth stated: 'Science is knowledge which we understand so well that we can teach it to a computer; and if we don't fully understand something, it is an art to deal with it' [Knuth, 1974]. The knowledge on Human Performance and its Monitoring is to be 'taught' to the system. To be able to construct automated analysis systems, an overview of the essential processes and components of these systems is needed. As Knuth also put it: 'Since the notion of an algorithm or a computer program provides us with an extremely useful test for the depth of our knowledge about any given subject, the process of going from an art to a science means that we learn how to automate something.'
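To make the inspect-clean-transform-model cycle described above concrete, the sketch below runs a minimal automated analysis pass over a synthetic stream of wearable heart-rate samples. The column names, value ranges, and the simple z-score anomaly rule are illustrative assumptions for this sketch, not part of the project or of Shyr and Spisic's work.

```python
import numpy as np
import pandas as pd


def automated_analysis(samples: pd.DataFrame) -> pd.DataFrame:
    """Minimal inspect/clean/transform/model pass over wearable sensor data.

    Assumes a DataFrame with 'timestamp' and 'heart_rate' columns; both
    names are illustrative placeholders, not taken from the project itself.
    """
    # Inspect and clean: drop missing readings and physiologically implausible values.
    cleaned = samples.dropna(subset=["heart_rate"])
    cleaned = cleaned[(cleaned["heart_rate"] > 30) & (cleaned["heart_rate"] < 220)]

    # Transform: smooth the signal with a rolling mean to reduce sensor noise.
    cleaned = cleaned.sort_values("timestamp").reset_index(drop=True)
    cleaned["hr_smooth"] = cleaned["heart_rate"].rolling(window=5, min_periods=1).mean()

    # Model: flag anomalies as samples far from the overall mean (simple z-score rule).
    mean, std = cleaned["hr_smooth"].mean(), cleaned["hr_smooth"].std()
    cleaned["anomaly"] = np.abs(cleaned["hr_smooth"] - mean) > 3 * (std if std > 0 else 1.0)
    return cleaned


if __name__ == "__main__":
    # Synthetic example data standing in for a wearable sensor stream.
    rng = np.random.default_rng(0)
    data = pd.DataFrame({
        "timestamp": pd.date_range("2024-01-01", periods=300, freq="s"),
        "heart_rate": rng.normal(75, 5, 300),
    })
    data.loc[150, "heart_rate"] = 190  # injected spike to trigger the anomaly rule
    result = automated_analysis(data)
    print(result["anomaly"].sum(), "anomalous samples flagged")
```

In a full system, the interpretation of such flagged segments would be iterated on automatically and then handed to domain experts for diagnosis and intervention, in line with the philosophy quoted above.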
Background: News publishers are going through hard times. Economic malaise and increased competition in the pluriform media landscape force publishers to cut costs while simultaneously investing in innovation. The further automation of the newsroom is a challenge in this respect. Outside the industry, techniques are emerging that publishers could use for this purpose, but these have not yet been 'translated' into user-friendly systems for editorial processes. The project participants have formulated practice-oriented research for this unexplored area. Objective: This research aims to answer the question: How can proven and newly developed techniques from the domain of 'natural language processing' contribute to the automation of a newsroom and the journalistic product? 'Natural language processing' - the automatic generation of language - is the subject of this research. In the field, this development is known as 'automated journalism' or 'robot journalism'. The research focuses on the development of algorithms ('robots') on the one hand, and on the impact of these technological developments on the news field on the other. This impact is examined from the perspective of both the journalist and the news consumer. Within this research, the project participants are developing two prototypes that together form the automated-journalism system. This system will be used during and after the project by researchers, journalists, teachers, and students. Intended results: The concrete result of the project is a prototype of an automated editorial system. The project also provides insight into how such systems can be embedded in a newsroom. The research offers a new perspective on how news consumers value the development of 'automated journalism' in the Netherlands. The project team shares the research results through presentations for the publishing industry, presentations at scientific conferences, publications in (trade) journals, reflection meetings with fellow degree programmes, and a summarising white paper.