Background
Confounding bias is a common concern in epidemiological research. Its presence is often assessed by comparing exposure effect estimates between univariable and multivariable regression models, using an arbitrary threshold of a 10% difference to indicate confounding bias. However, many clinical researchers are unaware that applying this change-in-estimate criterion to logistic regression coefficients may lead to wrong conclusions. This is due to a statistical phenomenon called noncollapsibility, which manifests itself in logistic regression models. This paper aims to clarify the role of noncollapsibility in logistic regression and to provide guidance for determining the presence of confounding bias.

Methods
A Monte Carlo simulation study was designed to uncover patterns of confounding bias and noncollapsibility effects in logistic regression. An empirical data example was used to illustrate the inability of the change-in-estimate criterion to distinguish confounding bias from noncollapsibility effects.

Results
The simulation study showed that, depending on the sign and magnitude of the confounding bias and the noncollapsibility effect, the difference between the effect estimates from univariable and multivariable regression models may underestimate or overestimate the magnitude of the confounding bias. Because of the noncollapsibility effect, multivariable regression analysis and inverse probability weighting provided different but valid estimates of the confounder-adjusted exposure effect. In our data example, confounding bias was underestimated by the change in estimate due to the presence of a noncollapsibility effect.

Conclusion
In logistic regression, the difference between the univariable and multivariable effect estimates may reflect not only confounding bias but also a noncollapsibility effect. Ideally, the set of confounders is determined at the study design phase and based on subject matter knowledge.
To quantify confounding bias, one could compare the unadjusted exposure effect estimate and the estimate from an inverse probability weighted model.
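The noncollapsibility effect described above can be made concrete with a small numerical sketch. The snippet below uses a hypothetical population (all coefficients are illustrative assumptions, not taken from the study) in which a covariate Z affects the outcome Y but is independent of the exposure X, so Z is not a confounder. Even so, the marginal (unadjusted) odds ratio differs from the conditional (Z-adjusted) odds ratio, purely because the odds ratio is noncollapsible:

```python
from math import exp

def expit(t):
    """Inverse logit: maps a log-odds value to a probability."""
    return 1.0 / (1.0 + exp(-t))

# Hypothetical data-generating model (illustrative values):
# logit P(Y=1 | X, Z) = b0 + bx*X + bz*Z,
# with exposure X independent of covariate Z, so Z is NOT a confounder.
b0, bx, bz = -1.0, 1.0, 2.0
p_z = 0.5  # P(Z = 1)

def marginal_risk(x):
    """P(Y=1 | X=x), averaging the conditional risk over the distribution of Z."""
    return (1 - p_z) * expit(b0 + bx * x) + p_z * expit(b0 + bx * x + bz)

def odds(p):
    return p / (1 - p)

# Conditional (Z-adjusted) odds ratio: exp(bx), the same within each stratum of Z.
conditional_or = exp(bx)
# Marginal (unadjusted) odds ratio, computed from the Z-averaged risks.
marginal_or = odds(marginal_risk(1)) / odds(marginal_risk(0))

print(f"conditional (Z-adjusted) OR: {conditional_or:.3f}")
print(f"marginal (unadjusted) OR:    {marginal_or:.3f}")
# The marginal OR lies closer to 1 than the conditional OR even though
# Z is independent of X: a pure noncollapsibility effect, with zero confounding.
```

Because there is no confounding in this construction, the entire gap between the two odds ratios is a noncollapsibility effect; a change-in-estimate rule applied here would wrongly flag Z as a confounder.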
Previous research shows that an automatic tendency to approach alcohol plays a causal role in problematic alcohol use and can be retrained by Approach Bias Modification (ApBM). ApBM has been shown to be effective for patients diagnosed with alcohol use disorder (AUD) in inpatient treatment. This study aimed to investigate the effectiveness of adding an online ApBM to treatment as usual (TAU) in an outpatient setting compared to receiving TAU with an online placebo training. 139 AUD patients receiving face-to-face or online TAU participated in the study. The patients were randomized to an active or placebo version of 8 sessions of online ApBM over a 5-week period. Weekly consumption of standard units of alcohol (primary outcome) was measured at pre- and post-training and at 3- and 6-month follow-up. Approach tendency was measured pre- and post-ApBM training. No additional effect of ApBM was found on alcohol intake, nor on other outcomes such as craving, depression, anxiety, or stress. A significant reduction of the alcohol approach bias was found. This research showed that approach bias retraining in AUD patients in an outpatient treatment setting reduces the tendency to approach alcohol, but this training effect did not translate into a significant difference in alcohol reduction between groups. Possible explanations for the lack of effect of ApBM on alcohol consumption are treatment goal and severity of AUD. Future ApBM research should target outpatients with an abstinence goal and offer alternative, more user-friendly modes of delivering ApBM training.
Reporting of research findings is often selective. This threatens the validity of the published body of knowledge if the decision to report depends on the nature of the results. Evidence from studies on the causes and mechanisms underlying selective reporting may help to avoid or reduce reporting bias. Such research should be guided by a theoretical framework of possible causal pathways that lead to reporting bias. We build upon a classification of determinants of selective reporting that we recently developed in a systematic review of the topic. The resulting theoretical framework features four clusters of causes. There are two clusters of necessary causes: (A) motivations (e.g. a preference for particular findings) and (B) means (e.g. a flexible study design). These two combined represent a sufficient cause for reporting bias to occur. The framework also features two clusters of component causes: (C) conflicts and balancing of interests, referring to the individual or the team, and (D) pressures from science and society. The component causes may modify the effect of the necessary causes or may lead to reporting bias mediated through the necessary causes. Our theoretical framework is meant to inspire further research and to create awareness among researchers and end-users of research about reporting bias and its causes.