Background and purpose: Automatic approaches are widely used to automate dose optimization in radiotherapy treatment planning. This study systematically investigates how to configure automatic planning in order to create the best possible plans. Materials and methods: Automatic plans were generated using protocol-based automatic iterative optimization. Starting from a simple automation protocol consisting of the constraints for targets and organs at risk (OARs), the performance of the automatic approach was evaluated in terms of target coverage, OAR sparing, conformity, beam complexity, and plan quality. More complex protocols were systematically explored to improve the quality of the automatic plans. The protocols could be improved by adding a dose goal on the outer 2 mm of the PTV, by setting goals on strategically chosen subparts of OARs, by adding goals for conformity, and by limiting the leaf motion. For prostate plans, development of an automated post-optimization procedure was required to achieve precise control over the dose distribution. Automatic and manually optimized plans were compared for 20 head and neck (H&N), 20 prostate, and 20 rectum cancer patients. Results: Based on simple automation protocols, the automatic optimizer was not always able to generate adequate treatment plans. For the improved final configurations for the three sites, the dose was lower in the automatic plans than in the manual plans for 12 out of 13 considered OARs. In blind tests, the automatic plans were preferred in 80% of cases. Conclusions: With adequate, advanced protocols, the automatic planning approach is able to create high-quality treatment plans.
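As a purely illustrative aside, the configuration choices described above (a dose goal on the outer 2 mm of the PTV, goals on strategically chosen subparts of OARs, conformity goals, and a leaf-motion limit) could be represented as structured protocol data along the lines of the sketch below; every field name and numeric value is invented and does not correspond to the protocols or planning system used in the study.

```python
# Hypothetical representation of a protocol-based automation configuration.
# Structure names, goal types, and dose values are assumptions for illustration only.
automation_protocol = {
    "site": "prostate",
    "target_goals": [
        {"structure": "PTV", "type": "min_dose", "dose_gy": 74.0},
        # Separate goal on the outer 2 mm shell of the PTV, as described in the abstract
        {"structure": "PTV_outer_2mm", "type": "min_dose", "dose_gy": 71.0},
    ],
    "oar_goals": [
        {"structure": "Rectum", "type": "max_mean_dose", "dose_gy": 35.0},
        # Goal on a strategically chosen subpart of an OAR
        {"structure": "Rectum_near_PTV", "type": "max_dose", "dose_gy": 72.0},
    ],
    "conformity_goals": [
        {"isodose_pct": 95, "max_volume_ratio": 1.1},
    ],
    "delivery_constraints": {"max_leaf_motion_mm_per_deg": 3.0},
}
```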
Background: Manual muscle mass assessment based on Computed Tomography (CT) scans is recognized as a good marker for malnutrition, sarcopenia, and adverse outcomes. However, manual muscle mass analysis is cumbersome and time-consuming. An accurate, fully automated method is needed. In this study, we evaluate whether manual psoas annotation can be substituted by a fully automatic deep learning-based method. Methods: This study included a cohort of 583 patients with severe aortic valve stenosis planned to undergo Transcatheter Aortic Valve Replacement (TAVR). Psoas muscle area was annotated manually on the CT scan at the height of lumbar vertebra 3 (L3). The deep learning-based method mimics this approach by first determining the L3 level and subsequently segmenting the psoas at that level. The fully automatic approach, as well as the segmentation and slice selection steps, was evaluated using average bias, 95% limits of agreement, the Intraclass Correlation Coefficient (ICC), and the within-subject Coefficient of Variation (CV). To evaluate performance of the slice selection, visual inspection was performed. To evaluate segmentation, the Dice index was computed between the manual and automatic segmentations (0 = no overlap, 1 = perfect overlap). Results: Included patients had a mean age of 81 ± 6 years, and 45% were female. The fully automatic method showed a bias and limits of agreement of -0.69 [-6.60 to 5.23] cm2, an ICC of 0.78 [95% CI: 0.74-0.82], and a within-subject CV of 11.2% [95% CI: 10.2-12.2]. For slice selection, 84% of the selections were on the same vertebra between methods; bias and limits of agreement were 3.4 [-24.5 to 31.4] mm. The Dice index for segmentation was 0.93 ± 0.04; bias and limits of agreement were -0.55 [1.71 to 2.80] cm2. Conclusion: Fully automatic assessment of psoas muscle area demonstrates accurate performance at the L3 level in CT images. It is a reliable tool that offers great opportunities for analysis in large-scale studies and in clinical applications.
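For context on the overlap metric used above, the Dice index between a manual and an automatic binary segmentation can be computed as in the minimal Python sketch below; the function and the toy masks are illustrative and are not part of the study's pipeline.

```python
import numpy as np

def dice_index(manual_mask: np.ndarray, automatic_mask: np.ndarray) -> float:
    """Dice overlap between two binary segmentation masks.

    Returns 0.0 for no overlap and 1.0 for perfect overlap, matching the
    convention stated in the abstract.
    """
    manual = manual_mask.astype(bool)
    automatic = automatic_mask.astype(bool)
    intersection = np.logical_and(manual, automatic).sum()
    denominator = manual.sum() + automatic.sum()
    if denominator == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / denominator

# Example with two small toy masks (not real psoas segmentations)
a = np.array([[0, 1, 1], [0, 1, 0]])
b = np.array([[0, 1, 0], [0, 1, 0]])
print(round(dice_index(a, b), 2))  # 0.8
```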
Research into automatic text simplification aims to promote access to information for all members of society. To facilitate generalizability, simplification research often abstracts away from specific use cases and targets a prototypical reader and an underspecified content creator. In this paper, we consider a real-world use case – simplification technology for use in Dutch municipalities – and identify the needs of the content creators and the target audiences in this scenario. The stakeholders envision a system that (a) assists the human writer without taking over the task; (b) provides diverse outputs, tailored for specific target audiences; and (c) explains the suggestions that it outputs. These requirements call for technology that is characterized by modularity, explainability, and variability. We argue that these are important research directions that require further exploration.
The objective of this study was to determine if a 3-dimensional computer vision automatic locomotion scoring (3D-ALS) method was able to outperform human observers for classifying cows as lame or nonlame and for detecting cows affected and nonaffected by specific type(s) of hoof lesion. Data collection was carried out in 2 experimental sessions (5 months apart).
This paper examines how a serious game approach could support a participatory planning process by bringing stakeholders together to discuss interventions that assist the development of sustainable urban tourism. A serious policy game was designed and played in six European cities by a total of 73 participants, reflecting a diverse array of tourism stakeholders. Using observations of in-game experiences, a pre- and post-game survey, and short interviews six months after playing the game, the process and impact of the game were investigated. While it proved difficult to evaluate the value of a serious game approach, results demonstrate that enacting real-life policymaking in a serious game setting can enable stakeholders to come together and become more aware of the issues and complexities involved in urban tourism planning. This suggests a serious game can be used to stimulate the uptake of academic insights in a playful manner. However, it should be remembered that a game is a tool and does not, in itself, lead to inclusive participatory policymaking and more sustainable urban tourism planning. Consequently, care needs to be taken to ensure inclusiveness and prevent marginalization or disempowerment, both within game design and in the political formation of a wider participatory planning approach.
Background: Transmural palliative care interventions aim to identify older persons with palliative care needs and timely provide advance care planning, symptom management, and coordination of care. Nurses can have an important role in these interventions; however, their expertise is currently underused. A new transmural care pathway with a central role for the community care registered nurse in advance care planning aims to contribute to the quality of palliative care for older persons. Objective: To examine the perspectives of community nurses on the feasibility of a new transmural care pathway for advance care planning for older persons. Design: A qualitative study design using semi-structured interviews. Setting(s): Interviews were performed with community nurses of three participating homecare organizations in the Netherlands between March and May 2023. Participants: 19 community nurses. Methods: A topic guide was based on (1) challenges in advance care planning identified from the literature and (2) concepts that are important in assessing the feasibility of complex healthcare interventions provided by the Normalisation Process Theory framework. A combined inductive and deductive thematic analysis was performed. Results: Four themes were identified: views on the transmural care pathway, community nurses' needs to fulfil their role, key points regarding implementation, and evaluation of the new practice. In general, community nurses were positive about the feasibility of the new practice as it provided a more structured work process that could facilitate interprofessional collaboration and improve the quality of palliative care. Overall, the feasibility of the new practice, from the community nurses' perspective, was determined by (1) clear roles and responsibilities in the transmural care pathway, (2) standardized registration of advance care planning, and (3) close involvement of community nurses in the whole implementation process. Conclusions: We highlighted important factors, from the perspectives of community nurses, that need to be considered in the implementation of a new transmural care pathway for advance care planning. A clear division of roles and responsibilities, standardized registration of advance care planning, and involvement of community nurses during the whole implementation process were mentioned as important enabling factors. This knowledge might contribute to successful implementation of a transmural care pathway that aims to enhance the quality of palliative care for older persons. Tweetable abstract: Community nurses' perspectives on the feasibility of a transmural care pathway for advance care planning for older persons.
The retail industry consists of establishments selling consumer goods (e.g. technology, pharmaceuticals, food and beverages, apparel and accessories, home improvement, etc.) and services (e.g. specialty and movies) to customers through multiple channels of distribution, including both traditional brick-and-mortar and online retailing. Managing the corporate reputation of retail companies is crucial as it has many advantages; for instance, it has been proven to impact generated revenues (Wang et al., 2016). But in order to be able to manage corporate reputation, one has to be able to measure it, or, nowadays even better, listen to relevant social signals that are out there on the public web. One of the most extensive and widely used frameworks for measuring corporate reputation relies on conducting elaborate surveys with the respective stakeholders (Fombrun et al., 2015). This approach is valuable but laborious and resource-heavy, and it does not allow generating the automatic alerts and quick, live insights that are extremely needed in the internet era. For these purposes, a social listening approach is needed that can be tailored to online data, such as consumer reviews, as the main data source. Online review datasets are a form of electronic Word-of-Mouth (WOM) that, when a data source is picked that is relevant to retail, commonly contain relevant information about customers' perceptions regarding products (Pookulangara, 2011) and that are massively available. The algorithm that we have built in our application provides retailers with reputation scores for all variables that are deemed to be relevant to retail in the model of Fombrun et al. (2015). Examples of such variables for products and services are high quality, good value, stands behind, and meets customer needs. We propose a new set of subvariables with which these variables can be operationalized for retail in particular. Scores are calculated using proportions of positive opinion pairs, such as <fast, delivery> or <rude, staff>, that have been designed per variable. With these important insights extracted, companies can act accordingly and proceed to improve their corporate reputation. It is important to emphasize that, once the design is complete and implemented, all processing can be performed completely automatically and unsupervised. The application makes use of a state-of-the-art aspect-based sentiment analysis (ABSA) framework because of ABSA's ability to generate sentiment scores for all relevant variables and aspects. Since most online data is in open form and we deliberately want to avoid labelling any data by human experts, the unsupervised aspectator algorithm has been picked. It employs a lexicon to calculate sentiment scores and uses syntactic dependency paths to discover candidate aspects (Bancken et al., 2014). We have applied our approach to a large number of online review datasets that we sampled from a list of the 50 top global retailers according to the National Retail Federation (2020), covering both offline and online operations, and that we scraped from Trustpilot, a public website that is well known to retailers. The algorithm has been carefully evaluated by manually annotating a randomly sampled subset of the datasets for validation purposes by two independent annotators. The kappa score on this subset was 80%.
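As a rough illustration of the proportion-based scoring described above, the sketch below computes per-variable scores from hypothetical opinion pairs; the pair-to-variable mapping, the sentiment labels, and all values are invented for the example and do not reflect the actual aspectator pipeline or its designed pairs.

```python
from collections import defaultdict

# Hypothetical opinion pairs extracted from reviews: (opinion, aspect, sentiment),
# plus an invented mapping from pairs to reputation variables.
extracted_pairs = [
    ("fast", "delivery", "positive"),
    ("slow", "delivery", "negative"),
    ("rude", "staff", "negative"),
    ("helpful", "staff", "positive"),
    ("great", "quality", "positive"),
]

pair_to_variable = {
    ("fast", "delivery"): "meets customer needs",
    ("slow", "delivery"): "meets customer needs",
    ("rude", "staff"): "stands behind",
    ("helpful", "staff"): "stands behind",
    ("great", "quality"): "high quality",
}

def reputation_scores(pairs, mapping):
    """Proportion of positive opinion pairs per reputation variable."""
    counts = defaultdict(lambda: [0, 0])  # variable -> [positive, total]
    for opinion, aspect, sentiment in pairs:
        variable = mapping.get((opinion, aspect))
        if variable is None:
            continue  # pair not assigned to any reputation variable
        counts[variable][1] += 1
        if sentiment == "positive":
            counts[variable][0] += 1
    return {var: pos / total for var, (pos, total) in counts.items()}

print(reputation_scores(extracted_pairs, pair_to_variable))
# {'meets customer needs': 0.5, 'stands behind': 0.5, 'high quality': 1.0}
```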
When using an autonomous reconfigurable manufacturing system that offers generic services, it is possible to dynamically manufacture a range of products using the same manufacturing equipment. Opportunities are created to optimally scale production through reconfiguration and to automatically manufacture small amounts of unique or highly customizable products. The result is a short time to market for new products. This paper discusses the problems that arise when manufacturing systems are reconfigured and the impact of this action on the entire system. The proposed software architecture and tooling make it possible to quickly reconfigure a system without interfering with other systems, and show how the reconfigured hardware can be controlled without the need to reprogram the software. Parameters that are required to control the new hardware can be added using a simple tool. As a result, reconfiguration is simplified and can be achieved quickly by mechanics without reprogramming any systems. The impact is that time to market can be reduced and manufacturing systems can quickly be adapted to current real-time needs.
Objective: Acknowledging study limitations in a scientific publication is a crucial element of scientific transparency and progress. However, limitation reporting is often inadequate. Natural language processing (NLP) methods could support automated reporting checks, improving research transparency. In this study, our objective was to develop a dataset and NLP methods to detect and categorize self-acknowledged limitations (e.g., sample size, blinding) reported in randomized controlled trial (RCT) publications. Methods: We created a data model of limitation types in RCT studies and annotated a corpus of 200 full-text RCT publications using this data model. We fine-tuned BERT-based sentence classification models to recognize the limitation sentences and their types. To address the small size of the annotated corpus, we experimented with data augmentation approaches, including Easy Data Augmentation (EDA) and Prompt-Based Data Augmentation (PromDA). We applied the best-performing model to a set of about 12K RCT publications to characterize self-acknowledged limitations at larger scale. Results: Our data model consists of 15 categories and 24 sub-categories (e.g., Population and its sub-category DiagnosticCriteria). We annotated 1090 instances of limitation types in 952 sentences (4.8 limitation sentences and 5.5 limitation types per article). A fine-tuned PubMedBERT model for limitation sentence classification improved upon our earlier model by about 1.5 absolute percentage points in F1 score (0.821 vs. 0.8), with statistical significance. Our best-performing limitation type classification model, PubMedBERT fine-tuning with PromDA (Output View), achieved an F1 score of 0.7, improving upon the vanilla PubMedBERT model by 2.7 percentage points, with statistical significance. Conclusion: The model could support automated screening tools that can be used by journals to draw authors' attention to reporting issues. Automatic extraction of limitations from RCT publications could benefit peer review and evidence synthesis, and support advanced methods to search and aggregate the evidence from the clinical trial literature.
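As a rough sketch of the kind of fine-tuning described above, the snippet below trains a PubMedBERT-style checkpoint for binary limitation-sentence classification with the Hugging Face transformers library; the checkpoint name, the toy sentences, and the hyperparameters are assumptions, and the authors' actual training setup, data splits, and augmentation steps are not reproduced here.

```python
# Minimal sketch, not the authors' code: binary classification of
# limitation vs. non-limitation sentences with a biomedical BERT checkpoint.
import torch
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

checkpoint = "microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

# Toy examples standing in for annotated RCT sentences
sentences = ["The sample size was small.",              # limitation sentence
             "Patients were randomized to two arms."]   # non-limitation sentence
labels = [1, 0]

class SentenceDataset(torch.utils.data.Dataset):
    def __init__(self, texts, labels):
        self.enc = tokenizer(texts, truncation=True, padding=True, return_tensors="pt")
        self.labels = torch.tensor(labels)
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, idx):
        item = {k: v[idx] for k, v in self.enc.items()}
        item["labels"] = self.labels[idx]
        return item

args = TrainingArguments(output_dir="limitation-clf",
                         num_train_epochs=3,
                         per_device_train_batch_size=8)
trainer = Trainer(model=model, args=args,
                  train_dataset=SentenceDataset(sentences, labels))
trainer.train()
```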