Crime script analysis as a methodology for analysing criminal processes is underdeveloped. This is apparent from the varied approaches scholars take to applying crime scripting and to presenting their cybercrime scripts. The plethora of scripting methods raises significant concerns about the reliability and validity of these scripting studies. In this methodological paper, we demonstrate how object-oriented modelling (OOM) could address some of the currently identified methodological issues, thereby refining crime script analysis. More specifically, we suggest visualising crime scripts using static and dynamic modelling with the Unified Modelling Language (UML) to harmonise cybercrime scripts without compromising their depth. Static models visualise the objects in a system or process, their attributes and their relationships. Dynamic models visualise the actions and interactions that occur during a process. Creating these models in addition to the typical textual narrative could help analysts consider, organise and relate key aspects of crime scripts more systematically. In turn, this approach might, among other things, facilitate alternative ways of identifying intervention measures, support theorising about offender decision-making, and improve shared understanding of the crime phenomenon analysed. We illustrate the application of these models with a phishing script.
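By way of illustration (this sketch is ours, not taken from the paper, and all class, attribute and function names are hypothetical), the static and dynamic views described above can be approximated in object-oriented code: classes with attributes and associations correspond to the static model, while a sequence of actions corresponds to the dynamic model. A minimal Python sketch for a phishing script:

```python
# Illustrative sketch only: a static object model for a phishing script,
# rendered as Python classes. All names (Offender, Victim, PhishingEmail,
# deliver) are hypothetical and merely show how OOM concepts map onto
# script elements.
from dataclasses import dataclass, field

@dataclass
class Offender:                      # static model: object with attributes
    alias: str
    tools: list = field(default_factory=list)   # e.g. phishing kit, mailer

@dataclass
class Victim:
    email_address: str
    credentials_disclosed: bool = False

@dataclass
class PhishingEmail:                 # association between Offender and Victim
    sender: Offender
    recipient: Victim
    lure: str                        # pretext used in the message
    spoofed_domain: str              # domain imitated by the offender

def deliver(email: PhishingEmail) -> None:
    """Dynamic model: one action in the script's sequence of interactions."""
    print(f"{email.sender.alias} sends lure '{email.lure}' "
          f"to {email.recipient.email_address}")

offender = Offender(alias="actor01", tools=["phishing kit"])
victim = Victim(email_address="target@example.com")
deliver(PhishingEmail(offender, victim, "password reset", "examp1e.com"))
```

Writing the script down in this form forces the analyst to make explicit which actors, props and actions the textual narrative assumes, which is precisely the systematising benefit the paper attributes to OOM.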
To study the ways in which compounds can induce adverse effects, toxicologists have been constructing Adverse Outcome Pathways (AOPs). An AOP can be considered a pragmatic tool to capture and visualize the mechanisms underlying different types of toxicity inflicted by any kind of stressor, and it describes the interactions between key entities that lead to the adverse outcome on multiple biological levels of organization. The construction or optimization of an AOP is a labor-intensive process that currently depends on the manual search, collection, review and synthesis of the available scientific literature. This process could, however, be largely facilitated by using Natural Language Processing (NLP) to extract the information contained in scientific literature in a systematic, objective and rapid manner, leading to greater accuracy and reproducibility. This would allow researchers to invest their expertise in the substantive assessment of AOPs by replacing the time spent on evidence gathering with a critical review of the data extracted by NLP. As case examples, we selected two adversities frequently observed in the liver: cholestasis and steatosis, denoting the accumulation of bile and lipid, respectively. We used deep learning language models to recognize entities of interest in text and to establish causal relationships between them. We demonstrate how an NLP pipeline combining Named Entity Recognition and a simple rule-based relationship extraction model not only helps screen the literature for compounds related to liver adversities, but also extracts mechanistic information on how such adversities develop, from the molecular to the organismal level. Finally, we offer some perspectives opened up by the recent progress in Large Language Models and how these could be used in the future. We propose that this work makes two main contributions: 1) a proof of concept that NLP can support the extraction of information from text for modern toxicology, and 2) a template open-source model for the recognition of toxicological entities and the extraction of their relationships. All resources are openly accessible via GitHub (https://github.com/ontox-project/en-tox).
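To make the pipeline pattern concrete, the sketch below illustrates NER followed by rule-based relationship extraction. It is not the project's released pipeline (see the GitHub repository above for that); a small dictionary lookup stands in for the deep learning NER model so the example runs self-contained, and the entity lexicon and trigger words are invented for illustration:

```python
# Illustrative sketch, not the released en-tox pipeline. A tiny dictionary
# lookup stands in for the deep learning NER model so the rule-based
# relationship extraction step can be shown end to end; the lexicon and
# causal trigger words are invented for this example.
import re

ENTITY_LEXICON = {            # stand-in for a trained NER model
    "amiodarone": "COMPOUND",
    "steatosis": "ADVERSITY",
    "cholestasis": "ADVERSITY",
}
CAUSAL_TRIGGERS = ["induces", "causes", "leads to"]

def recognise_entities(sentence: str):
    """NER stand-in: label tokens that appear in the lexicon."""
    tokens = re.findall(r"[a-z]+", sentence.lower())
    return [(t, ENTITY_LEXICON[t]) for t in tokens if t in ENTITY_LEXICON]

def extract_relations(sentence: str):
    """Rule: a compound and an adversity co-occurring in a sentence with a
    causal trigger word yield a candidate causal relationship."""
    entities = recognise_entities(sentence)
    compounds = [e for e, label in entities if label == "COMPOUND"]
    adversities = [e for e, label in entities if label == "ADVERSITY"]
    triggers = [t for t in CAUSAL_TRIGGERS if t in sentence.lower()]
    return [(c, t, a) for c in compounds for t in triggers for a in adversities]

print(extract_relations("Amiodarone induces steatosis in rat hepatocytes."))
# [('amiodarone', 'induces', 'steatosis')]
```

Replacing the dictionary lookup with a transformer-based NER model gives the division of labour the abstract describes: the model recognises entities, while simple, auditable rules link them into candidate mechanistic relationships.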
Business rules play a critical role in decision making during the execution of business processes. Existing modelling techniques for business rules offer modellers guidelines on how to create models that are consistent, complete and syntactically correct. However, modelling guidelines that address manageability in terms of anomalies such as insertion, update and deletion anomalies are not widely available. This paper presents a normalisation procedure that provides guidelines for managing and organising business rules. The procedure is evaluated by means of an experiment based on existing case study material. The results show that the procedure is useful for minimising insertion and deletion anomalies.
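By analogy with normalisation in relational databases (the paper's own procedure is more elaborate), the following minimal sketch, using invented rule data, shows the kind of anomaly such a procedure guards against: when the same fact is repeated across several rules, inserting or deleting rules can silently lose or contradict that fact.

```python
# Minimal sketch (not the paper's procedure) of the anomalies at stake, with
# invented rule data. In the unnormalised form, the discount fact is repeated
# in every rule mentioning a customer type, so removing the last such rule
# also removes the knowledge of that type's discount (a deletion anomaly).
unnormalised_rules = [
    {"customer_type": "gold", "discount": 0.10, "condition": "order > 100"},
    {"customer_type": "gold", "discount": 0.10, "condition": "loyalty > 2y"},
]

# Normalised form: each fact is stated exactly once, so rules can be inserted
# or deleted without touching (or accidentally losing) the discount fact.
discounts = {"gold": 0.10}
rules = [
    {"customer_type": "gold", "condition": "order > 100"},
    {"customer_type": "gold", "condition": "loyalty > 2y"},
]

rules.pop()                         # delete a rule...
assert discounts["gold"] == 0.10    # ...the discount fact survives intact
```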