Background: Confounding bias is a common concern in epidemiological research. Its presence is often determined by comparing exposure effects between univariable and multivariable regression models, using an arbitrary threshold of a 10% difference to indicate confounding bias. However, many clinical researchers are not aware that the use of this change-in-estimate criterion may lead to wrong conclusions when applied to logistic regression coefficients. This is due to a statistical phenomenon called noncollapsibility, which manifests itself in logistic regression models. This paper aims to clarify the role of noncollapsibility in logistic regression and to provide guidance in determining the presence of confounding bias. Methods: A Monte Carlo simulation study was designed to uncover patterns of confounding bias and noncollapsibility effects in logistic regression. An empirical data example was used to illustrate the inability of the change-in-estimate criterion to distinguish confounding bias from noncollapsibility effects. Results: The simulation study showed that, depending on the sign and magnitude of the confounding bias and the noncollapsibility effect, the difference between the effect estimates from univariable and multivariable regression models may underestimate or overestimate the magnitude of the confounding bias. Because of the noncollapsibility effect, multivariable regression analysis and inverse probability weighting provided different but valid estimates of the confounder-adjusted exposure effect. In our data example, confounding bias was underestimated by the change in estimate due to the presence of a noncollapsibility effect. Conclusion: In logistic regression, the difference between the univariable and multivariable effect estimates might reflect not only confounding bias but also a noncollapsibility effect. Ideally, the set of confounders is determined at the study design phase and based on subject matter knowledge. To quantify confounding bias, one could compare the unadjusted exposure effect estimate with the estimate from an inverse probability weighted model.
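To make the distinction concrete, the following minimal sketch (illustrative only; the data-generating process, coefficients, and variable names are assumptions, not the paper's simulation design) compares the crude, conditional, and inverse probability weighted logistic estimates on simulated data:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2024)
n = 200_000

# Confounder C affects both the exposure X and the outcome Y.
c = rng.normal(size=n)
x = rng.binomial(1, 1 / (1 + np.exp(-c)))                      # exposure
y = rng.binomial(1, 1 / (1 + np.exp(-(-1.0 + 1.0 * x + c))))   # outcome

# Univariable (crude) and multivariable (conditional) log odds ratios.
b_crude = sm.Logit(y, sm.add_constant(x)).fit(disp=0).params[1]
b_cond = sm.Logit(y, sm.add_constant(np.column_stack([x, c]))).fit(disp=0).params[1]

# Change-in-estimate criterion: flags "confounding" above a 10% change,
# but in logistic regression it also picks up the noncollapsibility effect.
print(f"change in estimate: {(b_crude - b_cond) / b_cond:+.1%}")

# Inverse probability weighting gives the marginal confounder-adjusted effect.
ps = sm.Logit(x, sm.add_constant(c)).fit(disp=0).predict(sm.add_constant(c))
w = np.where(x == 1, 1 / ps, 1 / (1 - ps))
b_ipw = sm.GLM(y, sm.add_constant(x), family=sm.families.Binomial(),
               freq_weights=w).fit().params[1]
print(f"crude {b_crude:.3f} | conditional {b_cond:.3f} | marginal (IPW) {b_ipw:.3f}")
```

Because the outcome model is logistic, the conditional and IPW estimates differ even though both validly adjust for the confounder; that gap is the noncollapsibility effect, which the change-in-estimate criterion conflates with confounding bias.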
Background: Advanced statistical modeling techniques may help predict health outcomes. However, these techniques do not always outperform traditional techniques such as regression. In this study, external validation was carried out for five modeling strategies for the prediction of disability in community-dwelling older people in the Netherlands. Methods: We analyzed data from five studies of community-dwelling older people in the Netherlands. For the prediction of the total disability score as measured with the Groningen Activity Restriction Scale (GARS), we used fourteen predictors as measured with the Tilburg Frailty Indicator (TFI). Both the TFI and the GARS are self-report questionnaires. Five statistical modeling techniques were evaluated: general linear model (GLM), support vector machine (SVM), neural net (NN), recursive partitioning (RP), and random forest (RF). Each model was developed on one of the five data sets and then applied to each of the four remaining data sets. We assessed the performance of the models with calibration characteristics, the correlation coefficient, and the root mean squared error. Results: The GLM, SVM, RP, and RF models showed satisfactory performance when validated on the validation data sets. All models performed poorly on the deviating data set, both for development and validation, because its baseline characteristics deviated from those of the other data sets. Conclusion: The performance of four models (GLM, SVM, RP, RF) on the development data sets was satisfactory. This was also the case for the validation data sets, except when these models were developed on the deviating data set. The NN models performed much worse on the validation data sets than on the development data sets.
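A minimal sketch of this develop-on-one, validate-on-another scheme, assuming scikit-learn stand-ins for the five techniques; the synthetic 14-column data sets below are placeholders for the TFI predictors and GARS outcome, not the study's cohorts:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

# Stand-ins for the five modeling techniques evaluated in the study.
models = {
    "GLM": LinearRegression(),          # general linear model
    "SVM": SVR(),
    "NN": MLPRegressor(max_iter=2000),
    "RP": DecisionTreeRegressor(),      # recursive partitioning
    "RF": RandomForestRegressor(),
}

def external_validation(X_dev, y_dev, X_val, y_val):
    """Develop each model on one data set, validate on another."""
    for name, model in models.items():
        model.fit(X_dev, y_dev)                 # development data set
        pred = model.predict(X_val)             # external validation data set
        rmse = np.sqrt(mean_squared_error(y_val, pred))
        r = np.corrcoef(y_val, pred)[0, 1]
        print(f"{name}: r = {r:.2f}, RMSE = {rmse:.2f}")

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X1, X2 = rng.normal(size=(200, 14)), rng.normal(size=(200, 14))  # 14 predictors
    y1 = X1[:, 0] * 3 + rng.normal(size=200)
    y2 = X2[:, 0] * 3 + rng.normal(size=200)
    external_validation(X1, y1, X2, y2)
```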
This Whitepaper presents the essence of research into existing and emerging circular business models (CBMs). The research resulted in the identification of seven basic types of CBM, divided into three groups that together form a classification. This Whitepaper consists of three parts.
▪ The first part discusses the background and explains the circular economy (CE), its connection with sustainability, business models in general, and gives an overview of circular business models.
▪ The second part presents the developed classification of CBMs and describes each basic type based on its characteristics, resulting in seven knowledge maps. Finally, the last two, more future-oriented models are explained and illustrated in more detail.
▪ The third part looks back briefly at the reliability of the classification and then at the change management aspects of working on and with a CBM.
This report describes the Utrecht region with regard to sustainability and circular business models.
The QuickScan CBM (Circular Business Model) offers an approach to developing a circular business model. It focuses primarily on the manufacturing industry, although it can also be used in other sectors. It consists of three parts: (1) an introduction explaining the background and central concepts, (2) knowledge maps of seven business models that together form a classification, and (3) the actual QuickScan. An interactive application is available at Business Model Lab; this version is bilingual (Dutch and English). Regardless of the version, the QuickScan can be used to develop a new CBM or to adapt an existing business model through a qualitative approach. The starting point is that the better design and organisation of a CBM contribute to the transformation and transition towards a sustainable and circular economy.
Psychologists, psycholinguists, and other researchers using language stimuli have been struggling for more than 30 years with the problem of how to analyze experimental data that contain two crossed random effects (items and participants). The classical analysis of variance does not apply; alternatives have been proposed but have failed to catch on, and a statistically unsatisfactory procedure of using two approximations (known as F1 and F2) has become the standard. A simple and elegant solution using mixed model analysis has been available for 15 years, and recent improvements in statistical software have made mixed model analysis widely available. The aim of this article is to increase the use of mixed models by giving a concise practical introduction and by giving clear directions for undertaking the analysis in the most popular statistical packages. The article also introduces the djmixed add-on package for SPSS, which makes entering the models and reporting their results as straightforward as possible.
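The article itself targets SPSS (via djmixed) and other popular packages; purely as an illustration, a model with crossed random intercepts for participants and items can be sketched in Python with statsmodels by declaring both factors as variance components over a single dummy group (the synthetic data and column names below are assumptions):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_part, n_item = 20, 15

# Synthetic reaction times with crossed participant and item effects.
df = pd.DataFrame(
    [(p, i, cond) for p in range(n_part) for i in range(n_item) for cond in (0, 1)],
    columns=["participant", "item", "condition"],
)
p_eff = rng.normal(0, 30, n_part)   # per-participant random intercepts
i_eff = rng.normal(0, 20, n_item)   # per-item random intercepts
df["rt"] = (500 + 25 * df["condition"] + p_eff[df["participant"]]
            + i_eff[df["item"]] + rng.normal(0, 50, len(df)))

# Crossed random intercepts via variance components over one dummy group.
df["group"] = 1
model = smf.mixedlm(
    "rt ~ condition", df, groups="group",
    vc_formula={"participant": "0 + C(participant)", "item": "0 + C(item)"},
)
print(model.fit().summary())
```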
Individual and unorganized sports with a health-related focus, such as recreational running, have grown extensively in the last decade. Consistent with this development, there has been an exponential increase in the availability and use of electronic monitoring devices such as smartphone applications (apps) and sports watches. These electronic devices could provide support and monitoring for unorganized runners, who have no access to professional trainers and coaches. The purpose of this paper is to gain insight into the characteristics of event runners who use running-related apps and sports watches. This knowledge is useful from research, design, and marketing perspectives to adequately address unorganized runners’ needs, and to support them in healthy and sustainable running through personalized technology. Data used in this study are drawn from the standardized online Eindhoven Running Survey 2014 (ERS14). In total, 2,172 participants in the Half Marathon Eindhoven 2014 completed the questionnaire (a response rate of 40.0%). Binary logistic regressions were used to analyze the impact of socio-demographic, running-related, and psychographic variables on the use of running-related apps and sports watches. Next, consumer profiles were identified. The results indicate that the use of monitoring devices is affected by socio-demographic, running-related, and psychographic variables, and that this relationship depends on the type of monitoring device. Therefore, distinctive consumer profiles have been developed as a tool for designers and manufacturers of electronic running-related devices to better target (unorganized) runners’ needs through personalized and differentiated approaches. Apps are more likely to be used by younger, less experienced, and less involved runners. Hence, apps have the potential to target this group of novice, less trained, and unorganized runners. In contrast, sports watches are more likely to be used by a different group: older, more experienced runners with higher involvement. Although apps and sports watches may potentially promote and stimulate sports participation, these electronic devices require a more differentiated approach to target the specific needs of runners. Considerable efforts in terms of personalization and tailoring have to be made to develop the full potential of these electronic devices as drivers for healthy and sustainable sports participation.
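As a sketch of the binary logistic regression analysis described above (the survey columns and simulated coefficients below are hypothetical, not the actual ERS14 items or results):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical stand-in for the survey data; in the study, predictors covered
# socio-demographic, running-related, and psychographic characteristics.
rng = np.random.default_rng(7)
n = 2172
survey = pd.DataFrame({
    "age": rng.integers(18, 70, n),
    "female": rng.integers(0, 2, n),
    "years_running": rng.integers(0, 30, n),
    "involvement": rng.normal(0, 1, n),
})
# Simulate app use as more likely among younger, less experienced runners.
logit_p = 2.0 - 0.04 * survey["age"] - 0.05 * survey["years_running"]
survey["uses_app"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

fit = smf.logit("uses_app ~ age + female + years_running + involvement",
                data=survey).fit(disp=0)
print(np.exp(fit.params))  # odds ratios per predictor
```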
The need to better understand how to manage the real logistics operations at Schiphol Airport, a strategic hub for the economic development of the Netherlands, created the conditions for a project in which academia and industry partnered to build a simulation model of the Schiphol Airport landside operations. This paper presents such a model using discrete-event simulation. A realistic representation was developed of the airport's open road network as well as the (un)loading dock capacities and locations of the five ground handlers at Schiphol Airport. Furthermore, to provide practitioners with applicable consolidation and truck-dispatching policies, some easy-to-implement rules are proposed and implemented in the model. Preliminary results from this model show that truck-dispatching policies have a higher impact than consolidation policies on both the distance travelled by cooperating logistics operators within the airport and the shipments' average flow time. Furthermore, the approach presented in this study can be used for studying similar mega-hubs.
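As a toy illustration of the kind of easy-to-implement dispatch rule such a model can evaluate, the following simpy discrete-event sketch dispatches a truck whenever a threshold number of shipments is waiting (handler names, arrival rates, and the threshold are assumptions, not the paper's parameters):

```python
import random
import simpy

DISPATCH_THRESHOLD = 5  # dispatch a truck once this many shipments are waiting

def handler(env, name, queue):
    """Ground handler process: shipments arrive, a threshold rule dispatches trucks."""
    while True:
        yield env.timeout(random.expovariate(1.0))  # next shipment arrival
        queue.append(env.now)                       # record arrival time
        if len(queue) >= DISPATCH_THRESHOLD:        # simple dispatch rule
            flow = [env.now - t for t in queue]     # flow time per shipment
            print(f"t={env.now:6.1f} {name}: truck dispatched, "
                  f"mean flow time {sum(flow) / len(flow):.2f}")
            queue.clear()

random.seed(42)
env = simpy.Environment()
for name in ("handler_A", "handler_B"):
    env.process(handler(env, name, []))
env.run(until=50)
```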
Routine immunization (RI) of children is the most effective and timely public health intervention for decreasing child mortality rates around the globe. Pakistan, a low- and middle-income country (LMIC), has one of the highest child mortality rates in the world, occurring mainly due to vaccine-preventable diseases (VPDs). To improve RI coverage, a critical need is to identify potential RI defaulters at an early stage, so that appropriate interventions can be targeted at the population identified as being at risk of missing their scheduled vaccine uptakes. In this paper, a machine learning (ML) based predictive model is proposed to predict defaulting and non-defaulting children at upcoming immunization visits and to examine the effect of its underlying contributing factors. The predictive model uses data obtained from the Paigham-e-Sehat study, comprising the immunization records of 3,113 children. The model is designed to obtain optimal results across accuracy, specificity, and sensitivity, to ensure its outcomes remain practically relevant to the problem addressed. The model is further optimized through the selection of significant features and the removal of data bias. Nine machine learning algorithms were applied to predict defaulting children at the next immunization visit. The results showed that the random forest model achieves the optimal accuracy of 81.9%, with 83.6% sensitivity and 80.3% specificity. The main determinants of vaccination coverage were found to be vaccine coverage at birth, parental education, and the socio-economic conditions of the defaulting group. This information can assist relevant policy makers in taking proactive and effective measures to develop evidence-based, targeted, and timely interventions for defaulting children.
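A minimal sketch of the reported modeling setup, assuming a scikit-learn random forest and the accuracy/sensitivity/specificity evaluation (the synthetic features below are placeholders for the study's determinants, such as vaccine coverage at birth and parental education):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

# Synthetic stand-in for the immunization records of 3,113 children.
rng = np.random.default_rng(0)
X = rng.normal(size=(3113, 6))                         # placeholder features
y = (X[:, 0] + rng.normal(size=3113) > 0).astype(int)  # 1 = defaulter

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Evaluate across accuracy, sensitivity, and specificity, as in the study.
tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
print(f"accuracy:    {(tp + tn) / (tp + tn + fp + fn):.1%}")
print(f"sensitivity: {tp / (tp + fn):.1%}")  # recall on defaulters
print(f"specificity: {tn / (tn + fp):.1%}")
```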