The rising rate of preprints and publications, combined with persistent inadequate reporting practices and problems with study design and execution, has strained the traditional peer review system. Automated screening tools could potentially enhance peer review by helping authors, journal editors, and reviewers to identify beneficial practices and common problems in preprints or submitted manuscripts. Tools can screen many papers quickly, and may be particularly helpful in assessing compliance with journal policies and with straightforward items in reporting guidelines. However, existing tools cannot understand or interpret a paper in the context of the scientific literature, nor can they determine whether the methods used are suitable to answer the research question, or whether the data support the authors' conclusions. Editors and peer reviewers remain essential for assessing journal fit and the overall quality of a paper, including the experimental design, the soundness of the study's conclusions, and its potential impact and innovation. Automated screening tools cannot replace peer review, but they may aid authors, reviewers, and editors in improving scientific papers. Strategies for responsible use of automated tools in peer review may include setting performance criteria for tools, transparently reporting tool performance and use, and training users to interpret the tools' reports.
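To make concrete what a "straightforward" automated reporting check can and cannot do, here is a minimal sketch of a rule-based screen for two common reporting items. The keyword patterns, check names, and the screen_manuscript function are hypothetical illustrations, not the rules of any existing screening tool.

    import re

    # Hypothetical patterns for two simple reporting checks; real screening
    # tools rely on larger, validated rule sets or trained classifiers.
    CHECKS = {
        "sample size justification": re.compile(
            r"sample size|power analysis|power calculation", re.IGNORECASE),
        "data availability": re.compile(
            r"data .{0,20}available|data availability", re.IGNORECASE),
    }

    def screen_manuscript(text):
        # Report, per check, whether the manuscript mentions the item at all.
        return {name: bool(pat.search(text)) for name, pat in CHECKS.items()}

    example = ("A power analysis determined the sample size. "
               "All data are openly available in a public repository.")
    print(screen_manuscript(example))
    # -> {'sample size justification': True, 'data availability': True}

A check like this can verify that a statement exists, but not whether the underlying power analysis or data deposit is adequate, which is exactly the division of labour between tools and human reviewers described above.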
Background: A key factor in successfully preventing falls is the early identification of elderly people at high risk of falling. However, no easy-to-use pre-screening tool is currently available: existing tools are either insufficiently discriminative, time-consuming, and/or costly. This pilot investigates the feasibility of developing an automatic gait-screening method that uses a low-cost optical sensor and machine-learning algorithms to automatically detect features and classify gait patterns.

Method: Participants (n = 204, age 27 ± 7 yrs) performed a gait test under two conditions: control, and with distorted depth perception (induced by wearing special goggles). Each test consisted of 4 × 3 m of walking at a comfortable speed. Full-body 3D kinematics were captured using an optical sensor (Microsoft Xbox One Kinect). Tests were conducted in a public space to establish relatively 'natural' conditions. Data were processed in Matlab and common spatiotemporal variables were calculated per gait section. The 3D time-series data of the centre of mass for each section were used as input for a neural network that was trained to discriminate between the two conditions.

Results: Wearing the goggles affected the gait pattern significantly: gait velocity and step length decreased, and lateral sway increased compared to the control condition. A 2-layer neural network correctly classified 79% of the gait segments (i.e. with or without distorted vision).

Conclusions: The results show that the gait patterns of healthy people with distorted vision can be classified automatically with the proposed approach. Future work will focus on adapting this model to identify specific physical risk factors in the elderly.
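The abstract states that processing was done in Matlab and describes the classifier only as a 2-layer neural network. As a language-neutral illustration of the general shape of such a model on a flattened centre-of-mass time series, here is a NumPy sketch; the dimensions, the tanh/sigmoid activations, and the training details are assumptions for illustration, not the study's actual configuration.

    import numpy as np

    rng = np.random.default_rng(0)
    T, D = 100, 3                  # assumed: 100 time samples x 3 CoM coordinates
    n_in, n_hidden = T * D, 16     # assumed hidden-layer width

    W1 = rng.normal(0, 0.1, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0, 0.1, n_hidden);         b2 = 0.0

    def forward(x):
        # Hidden layer (tanh), then sigmoid output: P(distorted-vision condition).
        h = np.tanh(x @ W1 + b1)
        return h, 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

    def train_step(x, y, lr=0.01):
        # One gradient-descent step on the binary cross-entropy loss.
        global W1, b1, W2, b2
        h, p = forward(x)
        dz = p - y                       # dLoss/dlogit for sigmoid + cross-entropy
        dW2, db2 = h * dz, dz
        dh = dz * W2 * (1.0 - h ** 2)    # backpropagate through tanh
        dW1, db1 = np.outer(x, dh), dh
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2

    x = rng.normal(size=n_in)            # stand-in for one normalized gait section
    train_step(x, y=1.0)                 # label 1 = distorted depth perception
    print(forward(x)[1])

Training such a model over all labelled gait sections and evaluating on held-out segments is how a classification accuracy like the reported 79% would be estimated.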
Active learning has become an increasingly popular method for screening large amounts of data in systematic reviews and meta-analyses. The active learning process continually improves its predictions on the remaining unlabeled records, with the goal of identifying all relevant records as early as possible. However, determining the optimal point at which to stop the active learning process is a challenge: the cost of additional labeling of records by the reviewer must be balanced against the cost of erroneous exclusions. This paper introduces the SAFE procedure, a practical and conservative set of stopping heuristics that offers a clear guideline for determining when to end the active learning process in screening software such as ASReview. By combining several complementary heuristics, the procedure minimizes the risk of missing relevant papers while giving reviewers a practical basis for an informed decision on when to stop screening. Although active learning can significantly enhance the quality and efficiency of screening, the method may be more applicable to some types of datasets and problems than others. Ultimately, the decision to stop the active learning process depends on careful consideration of the trade-off between the cost of additional record labeling and the potential errors of the current model for the specific dataset and context.
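The abstract does not spell out the individual SAFE heuristics, so the sketch below implements just one conservative rule from the same family that is common in screening practice: stop only after a long run of consecutively irrelevant records. The threshold and the function name are illustrative assumptions, not the SAFE procedure itself.

    def should_stop(labels, n_consecutive_irrelevant=50):
        # labels: chronological reviewer decisions (1 = relevant, 0 = irrelevant).
        # Conservative rule: stop only once the most recent
        # n_consecutive_irrelevant records were all labeled irrelevant.
        if len(labels) < n_consecutive_irrelevant:
            return False
        return all(label == 0 for label in labels[-n_consecutive_irrelevant:])

    # 60 records labeled so far; the last 50 were all irrelevant -> stop screening.
    history = [1] * 10 + [0] * 50
    print(should_stop(history))   # True

Raising the threshold lowers the risk of erroneous exclusions at the cost of extra labeling effort, which is precisely the trade-off the stopping decision must balance.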