PURPOSE: Advanced radiotherapy treatments require appropriate quality assurance (QA) to verify 3D dose distributions. Moreover, increasing patient numbers demand efficient QA methods. In this study, a time-efficient method that combines model-based QA and measurement-based QA was developed: the hybrid-QA. The purpose of this study was to determine the reliability of the model-based QA and to evaluate the time efficiency of the hybrid-QA method.
METHODS: The accuracy of the model-based QA was determined by comparison of COMPASS-calculated dose with Monte Carlo calculations for heterogeneous media. In total, 330 intensity-modulated radiation therapy (IMRT) treatment plans were evaluated based on the mean gamma index (GI) with criteria of 3%/3 mm and a classification of PASS (GI ≤ 0.4), EVAL (0.4 < GI < 0.6), and FAIL (GI ≥ 0.6). Agreement between model-based QA and measurement-based QA was determined for 48 treatment plans, and linac stability was verified over 15 months. Finally, the time-efficiency improvement of the hybrid-QA was quantified for four representative treatment plans.
RESULTS: COMPASS-calculated dose was in agreement with Monte Carlo dose, with a maximum error of 3.2% in heterogeneous media with high density (2.4 g/cm³). Hybrid-QA results for IMRT treatment plans showed an excellent PASS rate of 98% for all cases. Model-based QA was in agreement with measurement-based QA, as shown by a minimal difference in GI of 0.03 ± 0.08. Linac stability was high, with an average GI of 0.28 ± 0.04. The hybrid-QA method resulted in a time-efficiency improvement of 15 min per treatment-plan QA compared to measurement-based QA.
CONCLUSIONS: The hybrid-QA method is adequate for efficient and accurate 3D dose verification. It combines the time efficiency of model-based QA with the reliability of measurement-based QA and is suitable for implementation within any radiotherapy department.
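The PASS/EVAL/FAIL classification described above is a simple thresholding rule on the mean gamma index; a minimal sketch (the function name and structure are our own, not part of the published workflow):

```python
def classify_plan(mean_gi: float) -> str:
    """Classify a treatment plan by its mean gamma index (GI),
    using the thresholds from the hybrid-QA study:
    PASS if GI <= 0.4, FAIL if GI >= 0.6, EVAL otherwise."""
    if mean_gi <= 0.4:
        return "PASS"   # plan accepted by model-based QA alone
    if mean_gi >= 0.6:
        return "FAIL"   # plan escalated to measurement-based QA
    return "EVAL"       # borderline: further evaluation needed

# Example: the reported average GI of 0.28 falls in the PASS band.
print(classify_plan(0.28))
```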
We propose a novel deception detection system based on Rapid Serial Visual Presentation (RSVP). One motivation for the new method is to present stimuli on the fringe of awareness, such that it is more difficult for deceivers to confound the deception test using countermeasures. The proposed system is able to detect identity deception (by using the first names of participants) with a 100% hit rate (at an alpha level of 0.05). To achieve this, we extended the classic Event-Related Potential (ERP) techniques (such as peak-to-peak) by applying Randomisation, a form of Monte Carlo resampling, which we used to detect deception at an individual level. In order to make the deployment of the system simple and rapid, we utilised data from three electrodes only: Fz, Cz and Pz. We then combined data from the three electrodes using Fisher's method so that each participant was assigned a single p-value, which represents the combined probability that a specific participant was being deceptive. We also present subliminal salience search as a general method to determine what participants find salient by detecting breakthrough into conscious awareness using EEG.
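The per-participant combination of electrode-level p-values via Fisher's method can be sketched as follows (a pure-Python illustration with made-up numbers, not the authors' code; the chi-square tail probability uses the closed form available for even degrees of freedom):

```python
import math

def fisher_combined_p(p_values):
    """Combine k independent p-values with Fisher's method.

    Under the null, X2 = -2 * sum(ln p_i) follows a chi-square
    distribution with 2k degrees of freedom. For even df = 2m the
    survival function is exp(-x/2) * sum_{i<m} (x/2)^i / i!.
    """
    k = len(p_values)
    x2 = -2.0 * sum(math.log(p) for p in p_values)
    half = x2 / 2.0
    # Chi-square tail probability for df = 2k (closed form, even df).
    return math.exp(-half) * sum(half**i / math.factorial(i) for i in range(k))

# One p-value per electrode (Fz, Cz, Pz) -- illustrative values only.
combined = fisher_combined_p([0.04, 0.03, 0.20])
print(combined < 0.05)
```

Note that one moderately large p-value (here 0.20 at Pz) does not prevent a significant combined result when the other electrodes carry strong evidence, which is the point of pooling across sites.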
Background: Confounding bias is a common concern in epidemiological research. Its presence is often determined by comparing exposure effects between univariable and multivariable regression models, using an arbitrary threshold of a 10% difference to indicate confounding bias. However, many clinical researchers are not aware that the use of this change-in-estimate criterion may lead to wrong conclusions when applied to logistic regression coefficients. This is due to a statistical phenomenon called noncollapsibility, which manifests itself in logistic regression models. This paper aims to clarify the role of noncollapsibility in logistic regression and to provide guidance in determining the presence of confounding bias.
Methods: A Monte Carlo simulation study was designed to uncover patterns of confounding bias and noncollapsibility effects in logistic regression. An empirical data example was used to illustrate the inability of the change-in-estimate criterion to distinguish confounding bias from noncollapsibility effects.
Results: The simulation study showed that, depending on the sign and magnitude of the confounding bias and the noncollapsibility effect, the difference between the effect estimates from univariable and multivariable regression models may underestimate or overestimate the magnitude of the confounding bias. Because of the noncollapsibility effect, multivariable regression analysis and inverse probability weighting provided different but valid estimates of the confounder-adjusted exposure effect. In our data example, confounding bias was underestimated by the change in estimate due to the presence of a noncollapsibility effect.
Conclusion: In logistic regression, the difference between the univariable and multivariable effect estimate might not only reflect confounding bias but also a noncollapsibility effect. Ideally, the set of confounders is determined at the study design phase and based on subject matter knowledge. To quantify confounding bias, one could compare the unadjusted exposure effect estimate and the estimate from an inverse probability weighted model.
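Noncollapsibility can be demonstrated with a small numerical example (the coefficients below are of our own choosing, not from the study): even when a covariate Z is independent of the exposure X, so that Z is not a confounder, the marginal odds ratio obtained by averaging risks over Z differs from the conditional odds ratio implied by the logistic model.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Data-generating logistic model: P(Y=1 | X, Z) = sigmoid(b0 + bx*X + bz*Z),
# with Z ~ Bernoulli(0.5) independent of X (hence Z is NOT a confounder).
b0, bx, bz = -1.0, 1.0, 2.0

def marginal_risk(x):
    """Average P(Y=1 | X=x, Z) over the distribution of Z."""
    return 0.5 * sigmoid(b0 + bx * x) + 0.5 * sigmoid(b0 + bx * x + bz)

def odds(p):
    return p / (1.0 - p)

conditional_or = math.exp(bx)                        # OR within each Z stratum
marginal_or = odds(marginal_risk(1)) / odds(marginal_risk(0))

# Despite the absence of confounding, the marginal OR is attenuated
# toward 1 relative to the conditional OR (noncollapsibility).
print(round(conditional_or, 3), round(marginal_or, 3))
```

Here the change-in-estimate criterion would flag a difference between the adjusted (conditional) and unadjusted (marginal) odds ratios even though no confounding is present, which is exactly the pitfall the abstract describes.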