Abstract: Healthcare organizations operate within a network of governments, insurers, inspection services, and other healthcare organizations to provide clients with the best possible care. The parties involved must collaborate and are accountable to each other for the care provided. This has led to a diversity of administrative processes, supported by a multi-system landscape, that imposes administrative burdens on healthcare professionals. Management methods such as Enterprise Architecture (EA) should help to develop and manage such landscapes, but they are systematic, whereas the network of healthcare parties is dynamic. The aim of this research is therefore to develop an EA framework that fits the dynamics of network organizations, such as those in long-term healthcare. This research proposal outlines the practical and scientific relevance of the research and the proposed method, and describes the current status and next steps.
DOCUMENT
Many quality aspects of software systems are addressed in the existing literature on software architecture patterns, but the aspect of system administration appears to be overlooked, even though it is an important aspect as well. In this work we present three software architecture patterns that, when applied by software architects, support the work of system administrators: PROVIDE AN ADMINISTRATION API, SINGLE FILE LOCATION, and CENTRALIZED SYSTEM LOGGING. PROVIDE AN ADMINISTRATION API addresses the problems encountered when automating administration tasks. The SINGLE FILE LOCATION pattern helps system administrators find the files of an application in one (hierarchical) place. CENTRALIZED SYSTEM LOGGING helps prevent a proliferation of logging formats and locations. Abstract provided by the authors. Published in PLoP '13: Proceedings of the 20th Conference on Pattern Languages of Programs, ACM.
DOCUMENT
Objective: Acknowledging study limitations in a scientific publication is a crucial element of scientific transparency and progress. However, limitation reporting is often inadequate. Natural language processing (NLP) methods could support automated reporting checks, improving research transparency. In this study, our objective was to develop a dataset and NLP methods to detect and categorize self-acknowledged limitations (e.g., sample size, blinding) reported in randomized controlled trial (RCT) publications. Methods: We created a data model of limitation types in RCT studies and annotated a corpus of 200 full-text RCT publications using this data model. We fine-tuned BERT-based sentence classification models to recognize the limitation sentences and their types. To address the small size of the annotated corpus, we experimented with data augmentation approaches, including Easy Data Augmentation (EDA) and Prompt-Based Data Augmentation (PromDA). We applied the best-performing model to a set of about 12K RCT publications to characterize self-acknowledged limitations at larger scale. Results: Our data model consists of 15 categories and 24 sub-categories (e.g., Population and its sub-category DiagnosticCriteria). We annotated 1090 instances of limitation types in 952 sentences (4.8 limitation sentences and 5.5 limitation types per article). A fine-tuned PubMedBERT model for limitation sentence classification improved upon our earlier model by about 1.5 absolute percentage points in F1 score (0.821 vs. 0.8), with statistical significance. Our best-performing limitation type classification model, PubMedBERT fine-tuned with PromDA (Output View), achieved an F1 score of 0.7, improving upon the vanilla PubMedBERT model by 2.7 percentage points, with statistical significance. Conclusion: The model could support automated screening tools that journals can use to draw authors' attention to reporting issues.
Automatic extraction of limitations from RCT publications could benefit peer review and evidence synthesis, and support advanced methods to search and aggregate the evidence from the clinical trial literature.
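Two of the four Easy Data Augmentation (EDA) operations mentioned in the abstract, random swap and random deletion, can be sketched in a few lines of pure Python. This is a generic illustration of the technique, assuming whitespace-tokenized sentences; it is not the authors' exact implementation:

```python
import random

def random_swap(tokens, n_swaps=1, rng=None):
    """EDA random swap: exchange the positions of two random tokens, n_swaps times."""
    rng = rng or random.Random()
    out = tokens[:]
    for _ in range(n_swaps):
        if len(out) < 2:
            break
        i, j = rng.sample(range(len(out)), 2)
        out[i], out[j] = out[j], out[i]
    return out

def random_deletion(tokens, p=0.1, rng=None):
    """EDA random deletion: drop each token with probability p, keeping at least one."""
    rng = rng or random.Random()
    kept = [t for t in tokens if rng.random() > p]
    return kept if kept else [rng.choice(tokens)]
```

Applied to annotated limitation sentences, such perturbations yield additional label-preserving training examples, which is the role EDA plays when the annotated corpus is small.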
MULTIFILE