This method paper presents a template solution for text mining of scientific literature using the R tm package. Literature to be analyzed can be collected manually or automatically using the code provided with this paper. Once the literature is collected, text mining proceeds in three steps:
• loading and cleaning of text from articles;
• processing, statistical analysis, and clustering; and
• presentation of results using generalized and tailor-made visualizations.
These steps can be applied to a single document, multiple documents, or time-series groups of documents. References are provided to three published peer-reviewed articles that use the presented text mining methodology. The main advantages of our method are: (1) its suitability for both research and educational purposes, (2) compliance with the Findable, Accessible, Interoperable, and Reusable (FAIR) principles, and (3) availability of the code and example data on GitHub under the open-source Apache V2 license.
Research into automatic text simplification aims to promote access to information for all members of society. To facilitate generalizability, simplification research often abstracts away from specific use cases and targets a prototypical reader and an underspecified content creator. In this paper, we consider a real-world use case, simplification technology for use in Dutch municipalities, and identify the needs of the content creators and the target audiences in this scenario. The stakeholders envision a system that (a) assists the human writer without taking over the task; (b) provides diverse outputs, tailored for specific target audiences; and (c) explains the suggestions that it outputs. These requirements call for technology that is characterized by modularity, explainability, and variability. We argue that these are important research directions that require further exploration.
This paper proposes an amendment of the classification of safety events based on their controllability and contemplates the potential of an event to escalate into higher severity classes. It considers (1) whether the end-user had the opportunity to intervene in the course of an event, (2) the level of end-user familiarity with the situation, and (3) the positive or negative effects of end-user intervention against expected outcomes. To examine its potential, we applied the refined classification to 296 aviation safety investigation reports. The results suggested that pilots controlled only three-quarters of the occurrences, more than two-thirds of the controlled cases concerned fairly unfamiliar situations, and the flight crews succeeded in mitigating the possible negative consequences of events in about 71% of the cases. Further statistical tests showed that the controllability-related characteristics of events had not significantly changed over time, and that they varied across regions, aircraft, operational and event characteristics, as well as when fatigue had contributed to the occurrences. Overall, the findings demonstrated the value of using the controllability classification before considering the actual outcomes of events as a means to support the identification of system resilience and successes. The classification can also be embedded in voluntary reporting systems to allow end-users to express the degree of each of the controllability characteristics, so that management can monitor them over time and perform internal and external benchmarking. As far as mandatory reports are concerned, the classification could function as a decision-making parameter for prioritising incident investigations.
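The three controllability characteristics could be recorded as structured fields in a reporting system. The sketch below is a hypothetical encoding, not the authors' actual coding scheme; the field names and the mapping rule are illustrative assumptions:

```python
# Hypothetical encoding of the three controllability characteristics described
# above; field names and the labelling rule are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class SafetyEvent:
    end_user_could_intervene: bool        # (1) opportunity to intervene
    situation_familiar: bool              # (2) familiarity with the situation
    intervention_mitigated_outcome: bool  # (3) intervention effect vs expected outcome

def controllability_label(e: SafetyEvent) -> str:
    """Map the three characteristics to a coarse label for benchmarking."""
    if not e.end_user_could_intervene:
        return "uncontrolled"
    if e.intervention_mitigated_outcome:
        return "controlled-mitigated"
    return "controlled-unmitigated"

# Example: a controlled event in an unfamiliar situation, successfully mitigated.
event = SafetyEvent(True, False, True)
print(controllability_label(event))  # -> controlled-mitigated
```

Aggregating such labels over time would support the internal and external benchmarking the abstract describes.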
This Professional Doctorate (PD) project explores the intersection of artistic research, digital heritage, and interactive media, focusing on the reimagining of medieval Persian bestiaries through high dark fantasy and game-making. The research investigates how the process of creation with interactive 3D media can function as a memory practice. At its core, the project treats bestiaries—pre-modern collections of real and imaginary classifications of the world—as a window into West and Central Asian flora, fauna, and the landscape of memory, serving as both repositories of knowledge and imaginative, cosmological accounts of the more-than-human world. As tools for exploring non-human pre-modern agency, bestiaries offer a medium of speculative storytelling and explicate the unstable nature of memory in diasporic contexts. By integrating these themes into an interactive digital world, the research develops new methodologies for artistic research, treating world-building as a technique of attunement to heritage. Using a practice-based approach, the project aligns with MERIAN’s emphasis on "research in the wild," where artistic and scientific inquiries merge in experimental ways. It engages with hard-core game mechanics, mythopoetic, decompressed environmental storytelling, and hand-crafted, detailed, intentional world-building to offer new ways of interacting with the past that challenge nostalgia and monumentalization. How can a cultural practice do justice to other, more experimental forms of remembering and encountering cultural pasts, particularly those that embrace the interconnections between human and non-human entities? Specifically, how can artistic practice, through the medium of a virtual, bestiary-inspired dark fantasy interactive medium, allow for new modes of remembering that resist idealized and monumentalized histories?
What forms of inquiry can emerge when technology (3D media, open-world interactive digital media) becomes a tool of attention and a site of experimental attunement to cosmological heritage?
Organisations are increasingly embedding Artificial Intelligence (AI) techniques and tools in their processes. Typical examples are generative AI for images, videos, and text, and classification tasks commonly used, for example, in medical applications and industry. One danger of the proliferation of AI systems is the focus on the performance of AI models, neglecting important aspects such as fairness and sustainability. For example, an organisation might be tempted to use a model with better global performance even if it works poorly for specific vulnerable groups. The same logic applies to high-performance models that require a significant amount of energy for training and usage. At the same time, many organisations recognise the need for responsible AI development that balances performance with fairness and sustainability. This KIEM project proposal aims to develop a tool that can be employed by organisations that develop and implement AI systems and aim to do so more responsibly. Through visual aids and data visualisation, the tool facilitates making these trade-offs. By showing what these values mean in practice, which choices could be made, and how they relate to performance, we aspire to educate users on how the use of different metrics impacts the decisions made by the model and its wider consequences, such as energy consumption or fairness-related harms. This tool is meant to facilitate conversation between developers, product owners, and project leaders, to assist them in making their choices more explicit and responsible.
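The performance-fairness trade-off the proposal describes can be made concrete with a toy computation. The example below is a minimal sketch under assumed data; the fairness metric (a per-group accuracy gap) and the numbers are illustrative, not the project's actual design:

```python
# Minimal sketch of the trade-off the proposed tool could surface:
# overall accuracy vs a simple fairness metric (per-group accuracy gap).
# The data and the metric choice are illustrative assumptions.

def accuracy(pairs):
    """Fraction of (prediction, label) pairs that match."""
    return sum(p == y for p, y in pairs) / len(pairs)

# Hypothetical predictions for two demographic groups.
group_a = [(1, 1), (1, 1), (0, 0), (1, 0)]  # 3/4 correct
group_b = [(0, 1), (0, 0), (1, 1), (0, 1)]  # 2/4 correct

overall = accuracy(group_a + group_b)
gap = abs(accuracy(group_a) - accuracy(group_b))
print(f"overall accuracy: {overall:.2f}, group accuracy gap: {gap:.2f}")
```

A model with slightly lower overall accuracy but a smaller group gap may be the more responsible choice; visualising that relationship, alongside costs such as energy use, is exactly the kind of conversation aid the tool aims to provide.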