With summaries in Dutch, Esperanto and English. DOI: 10.4233/uuid:d7132920-346e-47c6-b754-00dc5672b437

"The subject of this study is deformation analysis of the earth's surface (or part of it) and spatial objects on, above or below it. Such analyses are needed in many domains of society. Geodetic deformation analysis uses various types of geodetic measurements to substantiate statements about changes in geometric positions.

Professional practice, e.g. in the Netherlands, regularly applies methods for geodetic deformation analysis that have shortcomings, e.g. because they rely on substandard analysis models or defective testing methods. These shortcomings hamper communication about the results of deformation analyses with the various parties involved. To improve communication, solid analysis models and a common language have to be used, which requires standardisation.

Operational demands for geodetic deformation analysis are the reason to formulate in this study seven characteristic elements that a solid analysis model needs to possess. Such a model can handle time series of several epochs. It analyses only size and form, not position and orientation of the reference system, and datum points may be under the influence of deformation. The geodetic and physical models are combined in one adjustment model. Full use is made of available stochastic information. Statistical testing and computation of minimal detectable deformations are incorporated. Solution methods can handle rank-deficient matrices (both the model matrix and the cofactor matrix). And, finally, a search for the best hypothesis/model is implemented. Because a geodetic deformation analysis model with all seven elements does not exist, this study develops such a model.

For effective standardisation, geodetic deformation analysis models need practical key performance indicators, a clear procedure for using the model, and the possibility to visualise the estimated deformations graphically."
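To make the rank-deficiency element concrete, here is a minimal sketch in R, not taken from the thesis: a free levelling network with no fixed datum point has a rank-deficient model matrix, and a Moore-Penrose pseudoinverse (MASS::ginv) yields the minimum-norm least-squares solution. The network geometry and observed height differences are illustrative assumptions.

```r
library(MASS)

## Height differences observed between 3 points: h2-h1, h3-h2, h3-h1
A <- rbind(c(-1, 1, 0),
           c( 0, -1, 1),
           c(-1, 0, 1))          # rank 2: the absolute level (datum) is undefined
y <- c(1.02, 0.49, 1.50)         # observed height differences (m), illustrative

qr(A)$rank                       # 2 < 3, so solve(t(A) %*% A) would fail
x <- ginv(t(A) %*% A) %*% t(A) %*% y   # minimum-norm least-squares estimate
x
```

The pseudoinverse picks, among all least-squares solutions, the one with minimum norm, which is one standard way to handle the datum defect without arbitrarily fixing a point.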
Background
Confounding bias is a common concern in epidemiological research. Its presence is often determined by comparing exposure effects between univariable and multivariable regression models, using an arbitrary threshold of a 10% difference to indicate confounding bias. However, many clinical researchers are not aware that the use of this change-in-estimate criterion may lead to wrong conclusions when applied to logistic regression coefficients. This is due to a statistical phenomenon called noncollapsibility, which manifests itself in logistic regression models. This paper aims to clarify the role of noncollapsibility in logistic regression and to provide guidance in determining the presence of confounding bias.

Methods
A Monte Carlo simulation study was designed to uncover patterns of confounding bias and noncollapsibility effects in logistic regression. An empirical data example was used to illustrate the inability of the change-in-estimate criterion to distinguish confounding bias from noncollapsibility effects.

Results
The simulation study showed that, depending on the sign and magnitude of the confounding bias and the noncollapsibility effect, the difference between the effect estimates from univariable and multivariable regression models may underestimate or overestimate the magnitude of the confounding bias. Because of the noncollapsibility effect, multivariable regression analysis and inverse probability weighting provided different but valid estimates of the confounder-adjusted exposure effect. In our data example, confounding bias was underestimated by the change in estimate due to the presence of a noncollapsibility effect.

Conclusion
In logistic regression, the difference between the univariable and multivariable effect estimates may reflect not only confounding bias but also a noncollapsibility effect. Ideally, the set of confounders is determined at the study design phase and based on subject matter knowledge. To quantify confounding bias, one could compare the unadjusted exposure effect estimate and the estimate from an inverse probability weighted model.
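A minimal sketch in R of the noncollapsibility phenomenon (not the paper's actual simulation code; sample size and effect sizes are illustrative assumptions): even with a randomised exposure, so no confounding at all, the conditional odds ratio from the multivariable model exceeds the marginal odds ratio, and a change-in-estimate rule would falsely signal confounding.

```r
set.seed(1)
n <- 1e5
z <- rbinom(n, 1, 0.5)               # risk factor, independent of exposure
x <- rbinom(n, 1, 0.5)               # "randomised" exposure: no confounding
y <- rbinom(n, 1, plogis(-1 + 1.0 * x + 1.5 * z))

or_marginal    <- exp(coef(glm(y ~ x,     family = binomial))[["x"]])
or_conditional <- exp(coef(glm(y ~ x + z, family = binomial))[["x"]])

## Inverse probability weighting targets the marginal effect;
## quasibinomial avoids the non-integer-weights warning.
ps <- glm(x ~ z, family = binomial)$fitted.values
w  <- ifelse(x == 1, 1 / ps, 1 / (1 - ps))
or_ipw <- exp(coef(glm(y ~ x, family = quasibinomial, weights = w))[["x"]])

## or_conditional > or_marginal, although z is not a confounder:
## the gap is a pure noncollapsibility effect, and or_ipw stays
## close to or_marginal because both target the marginal estimand.
round(c(marginal = or_marginal, conditional = or_conditional, ipw = or_ipw), 2)
```

This illustrates the abstract's recommendation: comparing the unadjusted estimate with the IPW estimate isolates confounding bias, because both estimate the same marginal quantity, whereas the multivariable estimate also absorbs the noncollapsibility effect.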
This method paper presents a template solution for text mining of scientific literature using the R tm package. Literature to be analyzed can be collected manually or automatically using the code provided with this paper. Once the literature is collected, the three steps for conducting text mining can be performed as outlined below:

• loading and cleaning of text from articles,
• processing, statistical analysis, and clustering, and
• presentation of results using generalized and tailor-made visualizations.

The text mining steps can be applied to a single document, multiple documents, or time-series groups of documents. References are provided to three published peer-reviewed articles that use the presented text mining methodology. The main advantages of our method are: (1) its suitability for both research and educational purposes, (2) its compliance with the Findable, Accessible, Interoperable and Reusable (FAIR) principles, and (3) the availability of the code and example data on GitHub under the open-source Apache V2 license.
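The three steps map naturally onto the tm API. The following is a minimal sketch, not the code shipped with the paper; the input directory, file pattern, sparsity threshold and clustering method are illustrative assumptions.

```r
library(tm)

## Step 1: load and clean text from articles (plain-text files assumed)
corpus <- VCorpus(DirSource("articles", pattern = "\\.txt$"))
corpus <- tm_map(corpus, content_transformer(tolower))
corpus <- tm_map(corpus, removePunctuation)
corpus <- tm_map(corpus, removeNumbers)
corpus <- tm_map(corpus, removeWords, stopwords("english"))
corpus <- tm_map(corpus, stripWhitespace)

## Step 2: processing, statistical analysis, and clustering
dtm  <- DocumentTermMatrix(corpus)
dtm  <- removeSparseTerms(dtm, 0.8)       # keep terms present in >= 20% of docs
freq <- sort(colSums(as.matrix(dtm)), decreasing = TRUE)
hc   <- hclust(dist(as.matrix(dtm)), method = "ward.D2")

## Step 3: presentation of results
barplot(head(freq, 20), las = 2)          # top-20 term frequencies
plot(hc)                                  # dendrogram of document clusters
```

The same pipeline applies unchanged whether the corpus holds one document, many, or time-sliced groups: only the contents of the input directory change.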