Background: Confounding bias is a common concern in epidemiological research. Its presence is often determined by comparing exposure effects between univariable and multivariable regression models, using an arbitrary threshold of a 10% difference to indicate confounding bias. However, many clinical researchers are unaware that applying this change-in-estimate criterion to logistic regression coefficients may lead to wrong conclusions, owing to a statistical phenomenon called noncollapsibility that manifests itself in logistic regression models. This paper aims to clarify the role of noncollapsibility in logistic regression and to provide guidance in determining the presence of confounding bias. Methods: A Monte Carlo simulation study was designed to uncover patterns of confounding bias and noncollapsibility effects in logistic regression. An empirical data example was used to illustrate the inability of the change-in-estimate criterion to distinguish confounding bias from noncollapsibility effects. Results: The simulation study showed that, depending on the sign and magnitude of the confounding bias and the noncollapsibility effect, the difference between the effect estimates from univariable and multivariable regression models may underestimate or overestimate the magnitude of the confounding bias. Because of the noncollapsibility effect, multivariable regression analysis and inverse probability weighting provided different but valid estimates of the confounder-adjusted exposure effect. In our data example, confounding bias was underestimated by the change in estimate because of the presence of a noncollapsibility effect. Conclusion: In logistic regression, the difference between the univariable and multivariable effect estimates may reflect not only confounding bias but also a noncollapsibility effect. Ideally, the set of confounders is determined at the study design phase and based on subject-matter knowledge. To quantify confounding bias, one can compare the unadjusted exposure effect estimate with the estimate from an inverse probability weighted model.
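To make the noncollapsibility effect concrete, here is a minimal simulation sketch (our own illustration, not the study's code; all variable names and parameter values are assumptions). A covariate Z is a pure risk factor that is independent of the exposure X, so it is not a confounder, yet adjusting for it still changes the estimated odds ratio:

```python
# Noncollapsibility without confounding: Z is a risk factor for Y but is
# independent of the exposure X, so adjusting for Z cannot remove any
# confounding bias -- yet the adjusted log-odds ratio differs from the crude one.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 200_000
x = rng.binomial(1, 0.5, n)        # exposure
z = rng.binomial(1, 0.5, n)        # risk factor, independent of x
logit = -1.0 + 1.0 * x + 2.0 * z   # true conditional log-OR for x is 1.0
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

crude = sm.Logit(y, sm.add_constant(x.astype(float))).fit(disp=False)
adjusted = sm.Logit(
    y, sm.add_constant(np.column_stack([x, z]).astype(float))
).fit(disp=False)

print(f"univariable log-OR:   {crude.params[1]:.2f}")     # ~0.80 (marginal)
print(f"multivariable log-OR: {adjusted.params[1]:.2f}")  # ~1.00 (conditional)
# The gap between the two is pure noncollapsibility, not confounding.
```

In this setup inverse probability weighting would recover the marginal effect of roughly 0.80, so comparing the crude estimate with an IPW estimate would correctly indicate the absence of confounding, while the naive change-in-estimate comparison would not.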
Robots are increasingly used in a variety of work environments, but surprisingly little attention has been paid to how robots change work. In this comparative case study, we explore how robotization changed the work design of order pickers and order packers in eight logistic warehouses. We found that all warehouses robotized tasks based on technological functionality to increase efficiency, which sometimes created jobs consisting of ‘left-over tasks’. Only two warehouses used a bottom-up approach, in which employees were involved in the implementation and the quality of work was considered important. Although the other warehouses did not, their work design sometimes still benefitted from robotization. The positive effects we identified are reduced physical and cognitive demands and opportunities for upskilling. Warehouses that pay no attention to the quality of work risk ending up with negative effects for employees, such as simplification and intensification of work, and reduced autonomy. We propose that understanding the consequences of robots for work design enables HR professionals to help manage this transition, both by giving relevant input at a strategic level about the importance of work design and by advocating for employees and their involvement.
An overview of innovations in a particular area, for example retail developments in the fashion sector (Van Vliet, 2014), and a subsequent discussion of whether these innovations are likely to achieve a ‘breakthrough’, has to be supplemented with the question of what the added value of such a new service or product is for the customer. The added value for the customer must not only be clear in terms of its direct (instrumental or hedonic) incentives; it must also be tested on its merits from a business point of view. This requires a methodology. Working with business models is a method for describing the added value of products/services for customers in a systematic and structured manner. That this is not always simple is evident from the discussions about retail developments, which do not excel in well-grounded business models. If business models are discussed at all, the discussion is more likely to concern strategic positioning in the market or value chain, or specifics such as revenue and distribution models (see Molenaar, 2011; Shopping 2020, 2014). Here we shall deal with two aspects of business models. First, we shall look at the different perspectives in the use of business models, ultimately arriving at four distinctive perspectives or methods of use. Second, we shall outline the context within which business models operate. In conclusion, we shall distil a research framework from these discussions by presenting an integrated model as the basis for further research into new services and products.
Multilevel models using logistic regression (MLogRM) and random forest models (RFM) are increasingly deployed in industry for binary classification. The European Commission’s proposed Artificial Intelligence Act (AIA) requires, under certain conditions, that the application of such models be fair, transparent, and ethical, which in turn implies technical assessment of these models. This paper proposes and demonstrates an audit framework for the technical assessment of RFMs and MLogRMs, focusing on model-, discrimination-, and transparency & explainability-related aspects. To measure these aspects, 20 KPIs are proposed, each paired with a traffic-light risk assessment method. An open-source dataset is used to train an RFM and an MLogRM, the KPIs are computed, and the results are compared with the traffic lights. The performance of popular explainability methods such as kernel-SHAP and tree-SHAP is assessed. The framework is expected to assist regulatory bodies in performing conformity assessments of binary classifiers, and it also benefits providers and users deploying such AI systems in complying with the AIA.
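As a hedged illustration of the explainability side of such an assessment, the sketch below (our construction; the dataset, model settings, and importance measure are assumptions, not the paper's actual audit setup) computes tree-SHAP attributions for a random forest binary classifier, the kind of quantity on which transparency & explainability KPIs can be built:

```python
# Tree-SHAP attributions for a random forest binary classifier (illustrative).
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer  # stand-in open-source dataset
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rfm = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Tree-SHAP gives exact Shapley values for tree ensembles in polynomial time;
# kernel-SHAP would approximate the same quantity model-agnostically.
sv = shap.TreeExplainer(rfm).shap_values(X_te)
sv_pos = sv[1] if isinstance(sv, list) else sv[..., 1]  # positive-class values

# Mean |SHAP| per feature: a global importance ranking that an auditor can
# check against domain expectations during a conformity assessment.
ranking = sorted(zip(np.abs(sv_pos).mean(axis=0), X.columns), reverse=True)
print(ranking[:5])
```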
The need to better understand how to manage the real logistics operations at Schiphol Airport, a strategic hub for the economic development of the Netherlands, created the conditions for a project in which academia and industry partnered to build a simulation model of the Schiphol Airport landside operations. This paper presents such a model using discrete-event simulation. A realistic representation of the open road network of the airport, as well as the (un)loading dock capacities and locations of the five ground handlers of Schiphol Airport, was developed. Furthermore, to provide practitioners with applicable consolidation and truck-dispatching policies, some easy-to-implement rules are proposed and implemented in the model. Preliminary results from this model show that truck-dispatching policies have a higher impact than consolidation policies on both the distance travelled by cooperating logistics operators within the airport and the average shipment flow time. Furthermore, the approach presented in this study can be used to study similar megahubs.
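For readers unfamiliar with the technique, the fragment below is a minimal discrete-event sketch (using simpy; all parameters are illustrative assumptions, not Schiphol data) of the trade-off such a model captures: a truck departs either when full (a consolidation rule) or when the oldest waiting shipment exceeds a dispatch deadline (a dispatching rule), and the average shipment flow time is measured:

```python
# Minimal discrete-event sketch of a combined consolidation / truck-dispatching
# rule at a single (un)loading dock. Parameters are illustrative assumptions.
import random
import simpy

CAPACITY = 10    # shipments per truck (assumed)
DEADLINE = 30.0  # max waiting time before a forced dispatch (assumed)
flow_times = []

def shipment_source(env, dock):
    while True:
        yield env.timeout(random.expovariate(1 / 3.0))  # mean interarrival: 3
        dock.append(env.now)                            # record arrival time

def dispatcher(env, dock):
    while True:
        yield env.timeout(1.0)  # review the dock every time unit
        if dock and (len(dock) >= CAPACITY or env.now - dock[0] >= DEADLINE):
            batch, dock[:] = dock[:CAPACITY], dock[CAPACITY:]  # load and depart
            flow_times.extend(env.now - t for t in batch)      # shipment flow times

random.seed(1)
env = simpy.Environment()
dock = []
env.process(shipment_source(env, dock))
env.process(dispatcher(env, dock))
env.run(until=10_000)
print(f"average flow time: {sum(flow_times) / len(flow_times):.1f}")
```

Varying CAPACITY and DEADLINE in a sketch like this is the simplest way to see how dispatching rules can dominate consolidation rules with respect to flow time.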
Background: Advanced statistical modeling techniques may help predict health outcomes. However, these techniques do not always outperform traditional techniques such as regression. In this study, external validation was carried out for five modeling strategies for predicting the disability of community-dwelling older people in the Netherlands. Methods: We analyzed data from five studies of community-dwelling older people in the Netherlands. For the prediction of the total disability score, as measured with the Groningen Activity Restriction Scale (GARS), we used fourteen predictors as measured with the Tilburg Frailty Indicator (TFI). Both the TFI and the GARS are self-report questionnaires. Five statistical modeling techniques were evaluated: general linear model (GLM), support vector machine (SVM), neural net (NN), recursive partitioning (RP), and random forest (RF). Each model was developed on one of the five data sets and then applied to each of the four remaining data sets. We assessed the performance of the models with calibration characteristics, the correlation coefficient, and the root mean squared error. Results: The GLM, SVM, RP, and RF models showed satisfactory performance characteristics when validated on the validation data sets. All models performed poorly on the deviating data set, both for development and validation, because its baseline characteristics deviated from those of the other data sets. Conclusion: The performance of four models (GLM, SVM, RP, RF) on the development data sets was satisfactory. This was also the case for the validation data sets, except when these models were developed on the deviating data set. The NN models performed much worse on the validation data sets than on the development data sets.
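The develop-on-one, validate-on-the-rest design can be summarised in a few lines. The sketch below uses synthetic cohorts (the TFI/GARS study data are not reproduced here, and all data-generating settings are assumptions), fitting the five techniques on one cohort and scoring them on the others with RMSE and the correlation coefficient:

```python
# Five modeling techniques, developed on one cohort and externally validated
# on the remaining cohorts (synthetic stand-in data).
import numpy as np
from scipy.stats import pearsonr
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

def make_cohort(n=400, p=14):  # 14 TFI items as predictors
    X = rng.binomial(1, 0.4, (n, p)).astype(float)
    beta = rng.uniform(0.5, 1.5, p)        # cohort-specific weights: heterogeneity
    y = X @ beta + rng.normal(0, 1.0, n)   # GARS-like total disability score
    return X, y

cohorts = [make_cohort() for _ in range(5)]
models = {
    "GLM": LinearRegression(),
    "SVM": SVR(),
    "NN": MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
    "RP": DecisionTreeRegressor(max_depth=4, random_state=0),
    "RF": RandomForestRegressor(random_state=0),
}

X_dev, y_dev = cohorts[0]  # development cohort; repeat with each cohort in turn
for name, model in models.items():
    model.fit(X_dev, y_dev)
    for i, (X_val, y_val) in enumerate(cohorts[1:], start=1):
        pred = model.predict(X_val)
        rmse = float(np.sqrt(np.mean((pred - y_val) ** 2)))
        r = pearsonr(pred, y_val)[0]
        print(f"{name} -> cohort {i}: RMSE={rmse:.2f}, r={r:.2f}")
```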
The world is on the verge of the fourth industrial revolution, which will considerably influence society and human life. Today, human beings are surrounded by technological advancement, and every day we encounter new, sophisticated technological systems that affect our daily lives. The business environment is significantly influenced by Industry 4.0, and a massive transformation in the labour market can be observed. The digital economy has become a disruptive factor in several sectors, and it has had a major impact on the logistics industry in terms of workforce transformation. The question that arises is to what extent the logistics sector is ready for the digital transformation in Industry 4.0, and which factors should be considered by industry players, governments, and multi-stakeholders in order to facilitate the workforce transformation. This study followed a qualitative approach using grounded theory to explain the phenomenon of workforce transformation within the logistics sector in Industry 4.0. Furthermore, a literature review was used to explain the role of human resource management in facilitating this process. The findings show that there is a lack of adequate awareness about the impact of the digital transformation on labour. Furthermore, the study discusses the role of human resource management as an agent of change in Industry 4.0. The current research presents recommendations for different stakeholders on how to prepare the current and future workforce for the upcoming changes. This study is significant in the sense that it adds to the existing literature and provides practitioners with vital information that can be used to facilitate the digital transformation of the logistics industry by preparing the labour market.
With the proliferation of misinformation on the web, automatic misinformation detection methods are becoming an increasingly important subject of study. Large language models have produced the best results among content-based methods, which rely on the text of the article rather than metadata or network features. However, fine-tuning such a model requires significant training data, which has led to the automatic creation of large-scale misinformation detection datasets. In these datasets, articles are not labelled directly. Rather, each news site is labelled for reliability by an established fact-checking organisation, and every article is subsequently assigned the corresponding label based on the reliability score of the news source in question. A recent paper has explored the biases present in one such dataset, NELA-GT-2018, and shown that the models are at least partly learning the stylistic and other features of different news sources rather than the features of unreliable news. We confirm part of their findings. Apart from studying the characteristics and potential biases of the datasets, we also find it important to examine how the model architecture influences the results. We therefore explore which text features, or combinations of features, are learned by models based on contextual word embeddings as opposed to basic bag-of-words models. To elucidate this, we perform extensive error analysis, aided by the SHAP post-hoc explanation technique, on a debiased portion of the dataset. We validate the explanation technique on our inherently interpretable baseline model.
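As an indication of what such an interpretable baseline looks like, here is a toy sketch (our construction; the texts, labels, and settings are invented for illustration, not NELA-GT-2018 data) of a bag-of-words classifier whose predictions are attributed to individual vocabulary terms with SHAP:

```python
# Toy bag-of-words baseline with per-term SHAP attributions (illustrative only).
import shap
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = [
    "shocking miracle cure doctors hate",       # toy 'unreliable' style
    "you will not believe this one trick",
    "government report details policy change",  # toy 'reliable' style
    "study finds modest effect in small trial",
]
labels = [1, 1, 0, 0]  # 1 = article from a source rated unreliable

vec = TfidfVectorizer()
X = vec.fit_transform(texts).toarray()
clf = LogisticRegression().fit(X, labels)

# Linear SHAP gives exact per-term contributions for a linear model, which is
# what makes this baseline inherently interpretable and usable for validating
# the explanation technique itself.
explainer = shap.LinearExplainer(clf, X)
sv = explainer.shap_values(X)
top = sorted(zip(vec.get_feature_names_out(), sv[0]),
             key=lambda t: abs(t[1]), reverse=True)
print(top[:5])  # terms driving the prediction for the first article
```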
This Whitepaper presents the essence of research into existing and emerging circular business models (CBMs). This results in the identification of seven basic types of CBM, divided into three groups that together form a classification. This Whitepaper consists of three parts.
▪ The first part discusses the background and explains the circular economy (CE), its connection with sustainability and business models, and gives an overview of circular business models.
▪ The second part gives an overview of the developed classification of CBMs and describes each basic type based on its characteristics. This has resulted in seven knowledge maps. Finally, the last two, more future-oriented models are further explained and illustrated.
▪ The third part looks back briefly at the reliability of the classification made and then at the aspects of change management in working on and with a CBM.
This report describes the Utrecht region with regard to sustainability and circular business models.