Background: INTELLiVENT-adaptive support ventilation (ASV) is an automated closed-loop mode of invasive ventilation for use in critically ill patients. INTELLiVENT-ASV automatically adjusts ventilator settings, without caregiver intervention, to achieve the lowest work and force of breathing. Aims: The aim of this case series is to describe the specific adjustments of INTELLiVENT-ASV in patients who were intubated for invasive ventilation because of acute hypoxemic respiratory failure. Study design: We describe three patients with severe acute respiratory distress syndrome (ARDS) due to COVID-19 who received invasive ventilation in our intensive care unit (ICU) during the first year of the COVID-19 pandemic. Results: INTELLiVENT-ASV could be used successfully, but only after certain ventilator settings were adjusted. Specifically, the high oxygenation targets that INTELLiVENT-ASV automatically selects when the lung condition ‘ARDS’ is ticked had to be lowered, and the titration ranges for positive end-expiratory pressure (PEEP) and inspired oxygen fraction (FiO2) had to be narrowed. Conclusions: These challenges taught us how to adjust the ventilator settings so that INTELLiVENT-ASV could be used in successive COVID-19 ARDS patients, and we experienced the benefits of this closed-loop ventilation mode in clinical practice. Relevance to clinical practice: INTELLiVENT-ASV is attractive for use in clinical practice. It is safe and effective in providing lung-protective ventilation. A closely observing user remains necessary, however. INTELLiVENT-ASV has strong potential to reduce the workload associated with ventilation because of its automated adjustments.
Objective: Acknowledging study limitations in a scientific publication is a crucial element of scientific transparency and progress. However, limitation reporting is often inadequate. Natural language processing (NLP) methods could support automated reporting checks, improving research transparency. In this study, our objective was to develop a dataset and NLP methods to detect and categorize self-acknowledged limitations (e.g., sample size, blinding) reported in randomized controlled trial (RCT) publications. Methods: We created a data model of limitation types in RCT studies and annotated a corpus of 200 full-text RCT publications using this data model. We fine-tuned BERT-based sentence classification models to recognize limitation sentences and their types. To address the small size of the annotated corpus, we experimented with data augmentation approaches, including Easy Data Augmentation (EDA) and Prompt-Based Data Augmentation (PromDA). We applied the best-performing model to a set of about 12K RCT publications to characterize self-acknowledged limitations at a larger scale. Results: Our data model consists of 15 categories and 24 sub-categories (e.g., Population and its sub-category DiagnosticCriteria). We annotated 1090 instances of limitation types in 952 sentences (on average, 4.8 limitation sentences and 5.5 limitation types per article). A fine-tuned PubMedBERT model for limitation sentence classification improved upon our earlier model by about 1.5 absolute percentage points in F1 score (0.821 vs. 0.8), a statistically significant improvement. Our best-performing limitation type classification model, PubMedBERT fine-tuned with PromDA (Output View), achieved an F1 score of 0.7, a statistically significant improvement of 2.7 percentage points over the vanilla PubMedBERT model. Conclusion: The model could support automated screening tools that journals can use to draw authors’ attention to reporting issues.
Automatic extraction of limitations from RCT publications could benefit peer review and evidence synthesis, and support advanced methods to search and aggregate the evidence from the clinical trial literature.
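To illustrate the kind of augmentation the Methods describe, EDA perturbs training sentences with four simple operations: synonym replacement, random insertion, random swap, and random deletion. Below is a minimal sketch of the two lexicon-free operations (swap and deletion), assuming whitespace tokenization; the function names and parameters are illustrative, not the authors' implementation, and synonym-based operations (which need a thesaurus such as WordNet) are omitted.

```python
import random

def random_swap(words, n):
    """Swap the positions of two randomly chosen words, n times."""
    words = words[:]  # copy so the caller's list is untouched
    for _ in range(n):
        i, j = random.sample(range(len(words)), 2)
        words[i], words[j] = words[j], words[i]
    return words

def random_deletion(words, p):
    """Drop each word with probability p, keeping at least one word."""
    kept = [w for w in words if random.random() > p]
    return kept if kept else [random.choice(words)]

def eda_augment(sentence, n_aug=4, alpha=0.1):
    """Generate n_aug perturbed variants of a limitation sentence.

    alpha controls perturbation strength: the number of swaps is
    roughly alpha * sentence length, and alpha is also the per-word
    deletion probability.
    """
    words = sentence.split()
    n = max(1, int(alpha * len(words)))
    variants = []
    for _ in range(n_aug):
        if random.random() < 0.5:
            new_words = random_swap(words, n)
        else:
            new_words = random_deletion(words, alpha)
        variants.append(" ".join(new_words))
    return variants
```

Each annotated limitation sentence would yield several label-preserving variants (e.g., from "the small sample size limits generalizability"), which are added to the training set before fine-tuning; because swap and deletion never introduce new tokens, every variant's vocabulary is a subset of the original sentence's.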