Game development businesses often choose Lua for separating scripted game logic from reusable engine code. Lua is easy to embed, has a simple interface, and offers a powerful and extensible scripting language. Using Lua, developers can create prototypes and scripts at early development stages. However, when larger quantities of engine code and script are available, developers encounter maintainability and quality problems. First, the available automated solutions for interoperability do not take domain-specific optimizations into account. Maintaining the coupling between the Lua interpreter and the engine code, usually written in C++, by hand is labour-intensive and error-prone. Second, assessing the quality of Lua scripts is hard due to a lack of tools that support static analysis; tools for dynamic analysis of Lua scripts only report warnings and errors at run-time and are limited to code coverage. A common solution to the first problem is developing an Interface Definition Language (IDL) from which "glue code" (interoperability code between interfaces) is generated automatically. We address the quality problems by proposing a method to complement techniques for Lua analysis. We introduce Lua AiR (Lua Analysis in Rascal), a framework for static analysis of Lua script in its embedded context, using IDL models and Rascal.
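To illustrate the idea of generating glue code from an IDL, the following is a minimal, hypothetical sketch: a toy IDL (a list of engine-function signatures) is turned into C++ wrapper stubs for the Lua C API. The IDL format, function names, and emitted code are illustrative assumptions, not the actual Lua AiR tooling.

```python
# Hypothetical sketch: generating C++ "glue code" strings for the
# Lua C API from a toy IDL description. All names are illustrative.

IDL = [
    # (engine function, return type, parameter types)
    ("set_health", "void", ["int"]),
    ("get_name", "string", []),
]

PUSH = {"void": "",
        "int": "lua_pushinteger(L, result);",
        "string": "lua_pushstring(L, result.c_str());"}
CHECK = {"int": "luaL_checkinteger(L, {i})",
         "string": "luaL_checkstring(L, {i})"}

def glue(name, ret, params):
    """Emit one Lua C API wrapper function as a string."""
    args = ", ".join(CHECK[p].format(i=i + 1) for i, p in enumerate(params))
    call = f"{name}({args});"
    if ret != "void":
        call = f"auto result = {call}"
    push = PUSH[ret]
    nret = 0 if ret == "void" else 1
    return (f"static int lua_{name}(lua_State* L) {{\n"
            f"    {call}\n"
            + (f"    {push}\n" if push else "")
            + f"    return {nret};\n}}\n")

for fn in IDL:
    print(glue(*fn))
```

Generating such wrappers automatically is exactly what makes the hand-written alternative look labour-intensive and error-prone: every engine function needs the same boilerplate of argument checks, the call itself, and result pushing.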
The present study aims at understanding and addressing certain challenges in the automation of composite repairs. This research is part of a larger, SIA-RAAK-funded project, FIXAR, running in three Universities of Applied Sciences in the Netherlands and a cluster of knowledge institutions and industry partners. The approach followed in the current study consists of three steps. First, the most promising candidate procedures for automated composite repair are identified by analysing current state-of-the-art methods as prescribed by OEMs and standards; processes that are tedious or even pose health risks may qualify for automation. Second, curing alternatives for composite repairs are compared by creating and testing specimens cured under different strategies. Lastly, a benchmark test of human-made composite repairs is used to set a reference baseline for automation quality. This benchmark can then be applied to define a lower limit and prevent over-optimization. The employed methodology includes data collection, analysis, modelling and experiments.
The goal of this study was to develop an automated monitoring system for the detection of pigs' bodies, heads and tails. The aim in the first part of the study was to recognize individual pigs (in lying and standing positions) in groups, together with their body parts (head/ears and tail), by using machine learning algorithms (a feature pyramid network). In the second part of the study, the goal was to improve the detection of tail posture (tail straight or curled) during activity (standing/moving around) by means of neural network analysis (YOLOv4). Our dataset (n = 583 images, 7579 pig postures) was annotated in Labelbox from 2D video recordings of groups (n = 12–15) of weaned pigs. The model recognized each individual pig's body with a precision of 96% relative to the intersection-over-union (IoU) threshold, whilst the precision for tails was 77% and for heads 66%, thereby already achieving human-level precision. The precision of pig detection in groups was the highest, while head and tail detection precision were lower. As the first study was relatively time-consuming, in the second part of the study we performed a YOLOv4 neural network analysis using 30 annotated images of our dataset to detect straight and curled tails. With this model, we were able to recognize tail postures with a high level of precision (90%).
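Intersection over union (IoU), the overlap measure behind the precision figures above, can be sketched in a few lines. Boxes here are `(x1, y1, x2, y2)` tuples, and the 0.5 matching threshold mentioned in the comment is a common convention in object detection, not a value stated by the study.

```python
# Minimal sketch of intersection over union (IoU): a predicted bounding
# box counts as a correct detection when its IoU with a ground-truth
# annotation exceeds some threshold (0.5 is a common choice).

def iou(box_a, box_b):
    """Return intersection-over-union of two axis-aligned boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (may be empty).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union else 0.0

print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # half-overlapping boxes -> 1/3
```

Small body parts such as tails and heads cover fewer pixels, so a slight localization error shifts their IoU below the threshold much faster than for whole bodies, which is consistent with the lower precision reported for heads and tails.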
Developing a framework that integrates Advanced Language Models into the qualitative research process. Qualitative research, vital for understanding complex phenomena, is often limited by labour-intensive data collection, transcription, and analysis processes. This hinders scalability, accessibility, and efficiency in both academic and industry contexts. As a result, insights are often delayed or incomplete, impacting decision-making, policy development, and innovation. The lack of tools to enhance accuracy and reduce human error exacerbates these challenges, particularly for projects requiring large datasets or quick iterations. Addressing these inefficiencies through AI-driven solutions like AIDA can empower researchers, enhance outcomes, and make qualitative research more inclusive, impactful, and efficient. The AIDA project enhances qualitative research by integrating AI technologies to streamline transcription, coding, and analysis processes. This innovation enables researchers to analyse larger datasets with greater efficiency and accuracy, providing faster and more comprehensive insights. By reducing manual effort and human error, AIDA empowers organisations to make informed decisions and implement evidence-based policies more effectively. Its scalability supports diverse societal and industry applications, from healthcare to market research, fostering innovation and addressing complex challenges. Ultimately, AIDA contributes to improving research quality, accessibility, and societal relevance, driving advancements across multiple sectors.
The focus of the research is 'Automated Analysis of Human Performance Data'. The three interconnected main components are (i) Human Performance, (ii) Monitoring Human Performance and (iii) Automated Data Analysis. Human Performance is both the process and the result of a person interacting with a context to engage in tasks, whereas the performance range is determined by the interaction between the person and the context. Cheap and reliable wearable sensors allow for gathering large amounts of data, which is very useful for understanding, and possibly predicting, the performance of the user. Given the amount of data generated by such sensors, manual analysis becomes infeasible; tools should be devised for performing automated analysis, looking for patterns, features, and anomalies. Such tools can help transform wearable sensors into reliable high-resolution devices, help experts analyse wearable sensor data in the context of human performance, and use it for diagnosis and intervention purposes. Shyr and Spisic describe automated data analysis as follows: "Automated data analysis provides a systematic process of inspecting, cleaning, transforming, and modelling data with the goal of discovering useful information, suggesting conclusions and supporting decision making for further analysis." Their philosophy is to do the tedious part of the work automatically, and to allow experts to focus on performing their research and applying their domain knowledge. However, automated data analysis means that the system has to teach itself to interpret interim results and do iterations. Knuth stated: "Science is knowledge which we understand so well that we can teach it to a computer; and if we don't fully understand something, it is an art to deal with it." [Knuth, 1974]. The knowledge on Human Performance and its Monitoring is to be 'taught' to the system.
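The inspect-clean-transform-model loop described by Shyr and Spisic can be sketched on a toy wearable-sensor series. The sample values, validity range, and z-score threshold below are illustrative assumptions, not parameters from the project.

```python
# Illustrative sketch of an automated-analysis step on a toy series of
# heart-rate samples: clean out sensor dropouts, then flag anomalies.
# The validity range (30-220 bpm) and z = 2.0 threshold are assumptions.

from statistics import mean, stdev

def clean(samples):
    """Drop physically impossible readings (sensor dropouts)."""
    return [s for s in samples if 30 <= s <= 220]

def anomalies(samples, z=2.0):
    """Flag samples more than z standard deviations from the mean."""
    mu, sigma = mean(samples), stdev(samples)
    return [s for s in samples if abs(s - mu) > z * sigma]

raw = [72, 75, 71, 0, 74, 190, 73, 72, 255, 76]  # 0 and 255 are dropouts
usable = clean(raw)
print("anomalies:", anomalies(usable))  # the 190 bpm spike stands out
```

The point of such tooling is exactly the philosophy quoted above: the tedious screening runs automatically, and the expert only inspects the flagged samples in the context of the task the person was performing.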
To be able to construct automated analysis systems, an overview of the essential processes and components of these systems is needed. As Knuth put it: "Since the notion of an algorithm or a computer program provides us with an extremely useful test for the depth of our knowledge about any given subject, the process of going from an art to a science means that we learn how to automate something."
Background News publishers are going through hard times. Economic malaise and increased competition in the pluriform media landscape force publishers to cut costs while simultaneously investing in innovation. Further automation of the newsroom poses a challenge here. Outside the industry, techniques are emerging that publishers could use for this purpose, but these have not yet been 'translated' into user-friendly systems for editorial processes. The project participants formulate practice-oriented research for this unexplored terrain. Objective This research aims to answer the question: how can proven and newly developed techniques from the domain of natural language processing contribute to the automation of a newsroom and the journalistic product? Natural language processing, the automatic generation of language, is the subject of the research. In the field, this development is known as 'automated journalism' or 'robot journalism'. The research focuses on the development of algorithms ('robots') on the one hand, and on the impact of these technological developments on the news field on the other. The impact is studied from the perspective of both the journalist and the news consumer. Within this research, the project participants develop two prototypes that together form the automated-journalism system. This system will be used during and after the project by researchers, journalists, lecturers and students. Intended results The concrete result of the project is a prototype of an automated editorial system. The project also yields insight into how such systems can be embedded in a newsroom. The research offers a new perspective on how news consumers value the development of 'automated journalism' in the Netherlands.
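Much automated journalism starts from template-based natural language generation: structured data is rendered as a short news item. The following is a hedged sketch of that idea on a hypothetical football result; the field names, template, and example data are illustrative, not the project's prototypes.

```python
# Hypothetical sketch of template-based "robot journalism": one match
# record (structured data) is rendered as a short news sentence.
# Field names and the example match are illustrative assumptions.

MATCH = {
    "home": "Ajax", "away": "PSV",
    "home_goals": 3, "away_goals": 1,
    "scorer": "Tadic",
}

def report(m):
    """Render one match record as a short news sentence."""
    home, away = m["home"], m["away"]
    hg, ag = m["home_goals"], m["away_goals"]
    if hg == ag:
        lead = f"{home} and {away} draw {hg}-{ag}"
    else:
        winner, loser = (home, away) if hg > ag else (away, home)
        lead = f"{winner} beats {loser} {max(hg, ag)}-{min(hg, ag)}"
    return f"{lead}, with {m['scorer']} opening the score."

print(report(MATCH))  # Ajax beats PSV 3-1, with Tadic opening the score.
```

Real systems layer many such templates (or learned language models) over data feeds, but the pipeline shape is the same: structured input, editorial rules, rendered text, which is why the human and editorial-embedding questions studied here matter as much as the generation itself.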
The project team shares the research results through presentations for the publishing industry, presentations at scientific conferences, publications in (trade) journals, reflection meetings with fellow degree programmes, and a summarizing white paper.