Research into automatic text simplification aims to promote access to information for all members of society. To facilitate generalizability, simplification research often abstracts away from specific use cases, and targets a prototypical reader and an underspecified content creator. In this paper, we consider a real-world use case – simplification technology for use in Dutch municipalities – and identify the needs of the content creators and the target audiences in this scenario. The stakeholders envision a system that (a) assists the human writer without taking over the task; (b) provides diverse outputs, tailored for specific target audiences; and (c) explains the suggestions that it outputs. These requirements call for technology that is characterized by modularity, explainability, and variability. We argue that these are important research directions that require further exploration.
MULTIFILE
Business rules play a critical role in an organization's daily activities. With the increased use of business rules (solutions), interest in modelling guidelines that address the manageability of business rules has increased as well. However, current research on modelling guidelines is mainly based on a theoretical view of modifications that can occur to a business rule set; research on modifications that actually occur in practice is limited. The goal of this study is to identify modifications that can occur to a business rule set and its underlying business rules. To accomplish this goal, we conducted a grounded theory study on 229 rule sets, as applied from March 2006 until June 2014 by the National Health Service. In total, 3,495 modifications were analysed, from which we defined eleven categories of modification that can occur to a business rule set. The classification provides a framework for the analysis and design of business rules management architectures.
DOCUMENT
Objective: To annotate a corpus of randomized controlled trial (RCT) publications with the checklist items of the CONSORT reporting guidelines, and to use the corpus to develop text mining methods for RCT appraisal. Methods: We annotated a corpus of 50 RCT articles at the sentence level using 37 fine-grained CONSORT checklist items. A subset (31 articles) was double-annotated and adjudicated, while 19 were annotated by a single annotator and reconciled by another. We calculated inter-annotator agreement at the article and section level using MASI (Measuring Agreement on Set-Valued Items) and at the CONSORT item level using Krippendorff's α. We experimented with two rule-based methods (phrase-based and section header-based) and two supervised learning approaches (support vector machine and BioBERT-based neural network classifiers) for recognizing 17 methodology-related items in the RCT Methods sections. Results: We created CONSORT-TM, consisting of 10,709 sentences, 4,845 (45%) of which were annotated with 5,246 labels. A median of 28 CONSORT items (out of a possible 37) were annotated per article. Agreement was moderate at the article and section levels (average MASI: 0.60 and 0.64, respectively). Agreement varied considerably among individual checklist items (Krippendorff's α = 0.06–0.96). The model based on BioBERT performed best overall for recognizing methodology-related items (micro-precision: 0.82, micro-recall: 0.63, micro-F1: 0.71). Combining models using majority vote and label aggregation further improved precision and recall, respectively. Conclusion: Our annotated corpus, CONSORT-TM, contains more fine-grained information than earlier RCT corpora. The low frequency of some CONSORT items made it difficult to train effective text mining models to recognize them. For the items commonly reported, CONSORT-TM can serve as a testbed for text mining methods that assess RCT transparency, rigor, and reliability, and support methods for peer review and authoring assistance.
Minor modifications to the annotation scheme and a larger corpus could facilitate improved text mining models. CONSORT-TM is publicly available at https://github.com/kilicogluh/CONSORT-TM.
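The MASI measure used above for set-valued agreement has a compact definition: Jaccard overlap between two annotators' label sets, weighted by a monotonicity factor. The sketch below is an illustrative pure-Python implementation following Passonneau's (2006) definition; it is not the authors' evaluation code, and the CONSORT item names in the example are only placeholders.

```python
def masi_similarity(a, b):
    """MASI similarity between two label sets:
    Jaccard overlap weighted by a monotonicity factor."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0
    jaccard = len(a & b) / len(a | b)
    if a == b:
        m = 1.0    # identical sets
    elif a <= b or b <= a:
        m = 0.67   # one set subsumes the other
    elif a & b:
        m = 0.33   # partial overlap
    else:
        m = 0.0    # disjoint sets
    return jaccard * m

# Agreement on the checklist items two annotators assign to one sentence:
print(masi_similarity({"Randomization", "Blinding"}, {"Randomization", "Blinding"}))  # → 1.0
print(masi_similarity({"Randomization"}, {"Randomization", "Blinding"}))
```

Averaging this similarity over sentences gives the kind of article- and section-level agreement scores reported in the abstract.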
DOCUMENT
Objective: Acknowledging study limitations in a scientific publication is a crucial element of scientific transparency and progress. However, limitation reporting is often inadequate. Natural language processing (NLP) methods could support automated reporting checks, improving research transparency. In this study, our objective was to develop a dataset and NLP methods to detect and categorize self-acknowledged limitations (e.g., sample size, blinding) reported in randomized controlled trial (RCT) publications. Methods: We created a data model of limitation types in RCT studies and annotated a corpus of 200 full-text RCT publications using this data model. We fine-tuned BERT-based sentence classification models to recognize the limitation sentences and their types. To address the small size of the annotated corpus, we experimented with data augmentation approaches, including Easy Data Augmentation (EDA) and Prompt-Based Data Augmentation (PromDA). We applied the best-performing model to a set of about 12K RCT publications to characterize self-acknowledged limitations at larger scale. Results: Our data model consists of 15 categories and 24 sub-categories (e.g., Population and its sub-category DiagnosticCriteria). We annotated 1090 instances of limitation types in 952 sentences (4.8 limitation sentences and 5.5 limitation types per article). A fine-tuned PubMedBERT model for limitation sentence classification improved upon our earlier model by about 1.5 absolute percentage points in F1 score (0.821 vs. 0.8), with statistical significance. Our best-performing limitation type classification model, PubMedBERT fine-tuned with PromDA (Output View), achieved an F1 score of 0.7, improving upon the vanilla PubMedBERT model by 2.7 percentage points, also with statistical significance. Conclusion: The model could support automated screening tools which can be used by journals to draw the authors' attention to reporting issues.
Automatic extraction of limitations from RCT publications could benefit peer review and evidence synthesis, and support advanced methods to search and aggregate the evidence from the clinical trial literature.
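Easy Data Augmentation (EDA), mentioned above, combines four simple token-level operations to generate extra training sentences. A minimal sketch of two of them (random swap and random deletion), assuming whitespace tokenization; this is an illustration of the general technique, not the authors' implementation.

```python
import random

def random_swap(tokens, n, rng):
    # Swap two randomly chosen token positions, n times.
    tokens = tokens[:]
    for _ in range(n):
        i, j = rng.randrange(len(tokens)), rng.randrange(len(tokens))
        tokens[i], tokens[j] = tokens[j], tokens[i]
    return tokens

def random_deletion(tokens, p, rng):
    # Drop each token with probability p; always keep at least one token.
    kept = [t for t in tokens if rng.random() > p]
    return kept or [rng.choice(tokens)]

rng = random.Random(42)
sentence = "the sample size was small".split()
print(random_swap(sentence, 1, rng))
print(random_deletion(sentence, 0.2, rng))
```

Each augmented variant keeps the original label, which is how EDA enlarges a small annotated corpus like the 952-sentence one described above.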
MULTIFILE
The main goal of this study was to investigate whether a computational analysis of text data from the National Student Survey (NSS) can add value to the existing, manual analysis. The results showed that the computational analysis of the texts from the open questions of the NSS contains information which enriches the results of the standard quantitative analysis of the NSS.
DOCUMENT
The goal of this study was therefore to test the idea that computationally analysing the open answers of the Fontys National Student Survey (NSS) using a selection of standard text mining methods (Manning & Schütze, 1999) will increase the value of these answers for educational quality assurance. It is expected that the human effort and time required for analysis will decrease significantly. The text data (in Dutch) of several years of Fontys National Student Surveys (2013-2018) was provided to Fontys students of the minor Applied Data Science. The results of the analysis were to include topic and sentiment modelling across multiple years of survey data. Comparing multiple years was necessary to capture and visualize any trends that a human investigator may have missed while analysing the data by hand. During data cleaning, all stop words and punctuation were removed, all text was converted to lower case, and names and inappropriate language – such as swear words – were deleted. About 80% of the 24,000 records were manually labelled with sentiment; the remainder was used to validate the algorithms. In the following step, the machine learning analysis steps (training, testing, outcome analysis and visualisation) were executed for a better comprehension of the text. The students aimed to improve classification accuracy by applying multiple sentiment analysis algorithms and topic modelling methods. The models were chosen arbitrarily, with a preference for models of low complexity. For reproducibility of our study, open source tooling was used. One of these tools was based on Latent Dirichlet Allocation (LDA). LDA is a generative statistical model that allows sets of observations to be explained by unobserved groups that explain why some parts of the data are similar (Blei, Ng & Jordan, 2003). For topic modelling the Gensim toolkit (Řehůřek, 2011) was used. Gensim is an open-source vector space modelling and topic modelling toolkit implemented in Python.
In addition, we recognized the absence of pretrained models for the Dutch language. To complete our prototype, a simple user interface was created in Python. This final step integrated our automated text analysis with visualisations of sentiments and topics. Remarkably, all extracted topics are related to themes defined by the NSS. This indicates that, in general, students' answers relate to topics of interest for educational institutions. The list of words extracted for each topic is also relevant to that topic. Although most of the results require further human expert interpretation, we can conclude that the computational analysis of the texts from the open questions of the NSS contains information which enriches the results of the standard quantitative analysis of the NSS.
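The cleaning steps described above (lowercasing, punctuation and stop word removal) can be sketched with the standard library alone. The stop word list below is a hypothetical stand-in for the Dutch list actually used in the study.

```python
import string

# Hypothetical stand-in for the Dutch stop word list used in the study.
STOP_WORDS = {"de", "het", "een", "en", "van", "ik", "dat", "is"}

def clean(text):
    # Lowercase, strip punctuation, split on whitespace, drop stop words.
    text = text.lower().translate(str.maketrans("", "", string.punctuation))
    return [w for w in text.split() if w not in STOP_WORDS]

print(clean("De docenten zijn goed, en het rooster is duidelijk!"))
# → ['docenten', 'zijn', 'goed', 'rooster', 'duidelijk']
```

In Gensim, such cleaned token lists would then typically be mapped to a `corpora.Dictionary` and bag-of-words vectors before fitting an LDA model with `models.LdaModel`.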
DOCUMENT
This article offers the first substantial survey of the Middle Dutch satire Dit es de Frenesie since the work of C.P. Serrure in the mid-nineteenth century. It contests much of the conventional wisdom surrounding De Frenesie, challenging the poem's usual classification as an early boerde or fabliau. Instead it is argued that the text is an experimental work, which blends together elements of several satiric traditions without committing itself to any one. The implications of this maneuver and others within the text are considered, revealing the poem's clear sympathy with the newly educated and articulate laity. De Frenesie itself is appended in both the original Middle Dutch and an English verse translation.
DOCUMENT
A common strategy for assigning keywords to documents is to select the most appropriate words from the document text. One of the most important criteria for a word to be selected as a keyword is its relevance for the text. The tf.idf score of a term is a widely used relevance measure. While it is easy to compute and gives quite satisfactory results, this measure does not take (semantic) relations between words into account. In this paper we study some alternative relevance measures that do use relations between words. They are computed by defining co-occurrence distributions for words and comparing these distributions with the document and the corpus distribution. We then evaluate keyword extraction algorithms defined by selecting different relevance measures. For two corpora of abstracts with manually assigned keywords, we compare manually extracted keywords with different automatically extracted ones. The results show that using word co-occurrence information can improve precision and recall over tf.idf.
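The tf.idf baseline discussed above is easy to state concretely. A minimal sketch over tokenized documents, using one common tf.idf weighting (the abstract does not specify the exact variant used in the paper):

```python
import math
from collections import Counter

def tfidf_keywords(doc, corpus, k=3):
    """Rank the terms of `doc` by tf.idf against `corpus` (a list of token lists)."""
    tf = Counter(doc)                       # raw term frequency within the document
    n = len(corpus)
    def idf(term):
        df = sum(1 for d in corpus if term in d)
        return math.log(n / df)             # df >= 1 because `doc` is in the corpus
    scores = {t: tf[t] * idf(t) for t in tf}
    return sorted(scores, key=scores.get, reverse=True)[:k]

docs = [["keyword", "extraction", "from", "text"],
        ["text", "mining", "from", "corpora"],
        ["neural", "text", "models"]]
print(tfidf_keywords(docs[0], docs, k=2))
```

Terms appearing in every document (here, "text") score zero, which is exactly the corpus-frequency penalty that the co-occurrence measures in the paper aim to refine.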
DOCUMENT
The search for existing non-animal alternative methods for use in experiments is currently challenging because of the lack of both comprehensive structured databases and balanced keyword-based search strategies to mine unstructured textual databases. In this paper we describe 3Ranker, a fast, keyword-independent algorithm for finding non-animal alternative methods for use in biomedical research. The 3Ranker algorithm was created using a machine learning approach, consisting of a Random Forest model built on a dataset of 35 million abstracts and constructed with weak supervision, followed by iterative model improvement with expert-curated data. We found a satisfactory trade-off between sensitivity and specificity, with Area Under the Curve (AUC) values ranging from 0.85 to 0.95. Trials showed that the AI-based classifier was able to identify articles that describe potential alternatives to animal use, among the thousands of articles returned by generic PubMed queries on dermatitis and Parkinson's disease. Application of the classification models on time series data showed the earlier implementation and acceptance of Three Rs principles in the area of cosmetics and skin research, as compared to the area of neurodegenerative disease research. The 3Ranker algorithm is freely available at www.open3r.org; the future goal is to expand this framework to cover multiple research domains and to enable its broad use by researchers, policymakers, funders and ethical review boards, in order to promote the replacement of animal use in research wherever possible.
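The AUC figures reported above have a simple probabilistic reading: the chance that a randomly chosen relevant abstract outscores a randomly chosen irrelevant one. An illustrative pure-Python computation of that quantity (not the authors' evaluation code; the scores below are hypothetical):

```python
def auc(scores, labels):
    """Area under the ROC curve, computed as P(positive score > negative score),
    counting ties as one half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical classifier scores for 3 relevant (1) and 3 irrelevant (0) abstracts:
print(auc([0.9, 0.8, 0.4, 0.7, 0.3, 0.1], [1, 1, 1, 0, 0, 0]))
```

An AUC of 0.85–0.95, as reported, means the ranker places a relevant abstract above an irrelevant one in roughly nine out of ten such pairs.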
DOCUMENT
We propose a combined visual and text-based programming environment based on the actor model, suitable for novice to expert programmers. This model encompasses simple communicating entities which easily scale from utilizing threads inside a single computer to massive distributed computer systems. To design our proposed environment, we classify the different levels of programming that users encounter when dealing with technologies in creative scenarios. We use this classification system as a foundation for designing our proposed environment to support (novice) users on their way to the next level. This framework is intended not only to exploit modern computing power through a concurrent programming paradigm, but also to let users interact with it at the different classification levels.
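The actor model the proposal builds on can be sketched with threads and queues: each actor owns a mailbox drained by its own thread, and actors interact only by sending messages. A minimal illustrative actor, not the proposed environment itself:

```python
import threading
import queue

class Actor:
    """A minimal actor: a private mailbox drained by the actor's own thread.
    Other code interacts with it only through send()."""
    def __init__(self, behaviour):
        self._mailbox = queue.Queue()
        self._behaviour = behaviour
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, msg):
        self._mailbox.put(msg)

    def _run(self):
        while True:
            msg = self._mailbox.get()
            if msg is None:          # poison pill stops the actor
                break
            self._behaviour(msg)

results = queue.Queue()
doubler = Actor(lambda n: results.put(n * 2))
doubler.send(21)
print(results.get())  # → 42
doubler.send(None)
```

Because actors share no state and communicate only via messages, the same design scales from in-process threads to distributed nodes, which is the scaling property the abstract describes.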
DOCUMENT