Estimation of the factor model by unweighted least squares (ULS) is distribution free, yields consistent estimates, and is computationally fast if the Minimum Residuals (MinRes) algorithm is employed. MinRes algorithms produce a converging sequence of monotonically decreasing ULS function values. Various suggestions for algorithms of the MinRes type are made for confirmatory as well as for exploratory factor analysis. These suggestions include the implementation of inequality constraints and the prevention of Heywood cases. A simulation study, comparing the bootstrap standard deviations for the parameters with the standard errors from maximum likelihood, indicates that these are virtually equal when the score vectors are sampled from the normal distribution. Two empirical examples demonstrate the usefulness of constrained exploratory and confirmatory factor analysis by ULS used in conjunction with the bootstrap method.
DOCUMENT
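A minimal sketch of the ULS criterion the abstract above refers to, assuming a sample correlation matrix R and a fixed number of factors; the loadings are found with a general-purpose optimizer rather than an actual MinRes update scheme, so this is illustrative only:

```python
import numpy as np
from scipy.optimize import minimize

def uls_loss(flat_loadings, R, n_vars, n_factors):
    """Sum of squared off-diagonal residuals of R - L @ L.T (the ULS criterion)."""
    L = flat_loadings.reshape(n_vars, n_factors)
    resid = R - L @ L.T
    np.fill_diagonal(resid, 0.0)   # ULS/MinRes ignores the diagonal (the uniquenesses)
    return np.sum(resid ** 2)

def uls_factor_analysis(R, n_factors, seed=0):
    n_vars = R.shape[0]
    rng = np.random.default_rng(seed)
    x0 = 0.1 * rng.standard_normal(n_vars * n_factors)
    res = minimize(uls_loss, x0, args=(R, n_vars, n_factors), method="L-BFGS-B")
    return res.x.reshape(n_vars, n_factors), res.fun
```

Bootstrap standard deviations, as compared with the maximum likelihood standard errors in the simulation study, would then follow from resampling rows of the raw data, recomputing R and re-estimating the loadings for each resample.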
Several models in data analysis are estimated by minimizing an objective function defined as the residual sum of squares between the model and the data. A necessary and sufficient condition for the existence of a least squares estimator is that the objective function attains its infimum at a unique point. It is shown that the objective function for Parafac-2 need not attain its infimum, whereas those of DEDICOM, constrained Parafac-2 and, under a weak assumption, SCA and Dynamals do attain their infima. Furthermore, the sequence of parameter vectors generated by an alternating least squares algorithm converges if it decreases the objective function to its infimum and that infimum is attained at one or finitely many points.
LINK
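The monotone behaviour of an alternating least squares algorithm can be illustrated with a generic low-rank model X ≈ A Bᵀ; this is a sketch of the ALS pattern discussed above, not the Parafac-2, DEDICOM or SCA updates themselves:

```python
import numpy as np

def als_low_rank(X, rank, n_iter=50, seed=0):
    """Fit X ≈ A @ B.T by alternating least squares and record the loss per iteration."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    A = rng.standard_normal((n, rank))
    B = rng.standard_normal((m, rank))
    losses = []
    for _ in range(n_iter):
        # Each step solves an exact least squares subproblem, so the loss cannot increase.
        B = np.linalg.lstsq(A, X, rcond=None)[0].T
        A = np.linalg.lstsq(B, X.T, rcond=None)[0].T
        losses.append(np.sum((X - A @ B.T) ** 2))
    return A, B, losses
```

The recorded sequence of losses is non-increasing, which is exactly the property used in the convergence argument above.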
Author supplied: "This paper gives a linearised adjustment model for the affine, similarity and congruence transformations in 3D that is easily extendable with other parameters to describe deformations. The model considers all coordinates stochastic. Full positive semi-definite covariance matrices and correlation between epochs can be handled. The determination of transformation parameters between two or more coordinate sets, determined by geodetic monitoring measurements, can be handled as a least squares adjustment problem. It can be solved without linearisation of the functional model if it concerns an affine, similarity or congruence transformation in one-, two- or three-dimensional space. If the functional model describes more than such a transformation, it is hardly ever possible to find a direct solution for the transformation parameters. Linearisation of the functional model and applying the least squares formulas is then an appropriate way of working. The adjustment model is given as a model of observation equations with constraints on the parameters. The starting point is the affine transformation, whose parameters are constrained to obtain the parameters of the similarity or congruence transformation. In this way the use of Euler angles is avoided. Because the model is linearised, iteration is necessary to reach the final solution. In each iteration step approximate coordinates are needed that fulfil the constraints. For the affine transformation it is easy to obtain approximate coordinates. For the similarity and congruence transformations the approximate coordinates have to comply with the constraints. To achieve this, use is made of the singular value decomposition of the rotation matrix. To show the effectiveness of the proposed adjustment model, total station measurements in two epochs of monitored buildings are analysed. Coordinate sets with full, rank-deficient covariance matrices are determined from the measurements and adjusted with the proposed model. Testing the adjustment for deformations results in detection of the simulated deformations."
MULTIFILE
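For the simple equal-weight case, the abstract notes that a similarity or congruence transformation can be solved directly, without linearisation. Below is a sketch of that direct solution via the singular value decomposition; the paper's adjustment model additionally handles full, rank-deficient covariance matrices and constraints, which are omitted here:

```python
import numpy as np

def similarity_transform(src, dst):
    """Return scale s, rotation R and translation t such that dst ≈ s * R @ src + t.

    src and dst are (n, 3) arrays of corresponding 3D points.
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, S, Vt = np.linalg.svd(src_c.T @ dst_c)
    D = np.eye(3)
    D[2, 2] = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    R = Vt.T @ D @ U.T
    s = np.trace(np.diag(S) @ D) / np.sum(src_c ** 2)
    t = dst.mean(axis=0) - s * R @ src.mean(axis=0)
    return s, R, t
```

Fixing s = 1 turns the similarity solution into the congruence (rigid-body) case.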
With summaries in Dutch, Esperanto and English. DOI: 10.4233/uuid:d7132920-346e-47c6-b754-00dc5672b437 "The subject of this study is deformation analysis of the earth's surface (or part of it) and of spatial objects on, above or below it. Such analyses are needed in many domains of society. Geodetic deformation analysis uses various types of geodetic measurements to substantiate statements about changes in geometric positions. Professional practice, e.g. in the Netherlands, regularly applies methods for geodetic deformation analysis that have shortcomings, e.g. because the methods apply substandard analysis models or defective testing methods. These shortcomings hamper communication about the results of deformation analyses with the various parties involved. To improve communication, solid analysis models and a common language have to be used, which requires standardisation. Operational demands for geodetic deformation analysis are the reason to formulate in this study seven characteristic elements that a solid analysis model needs to possess. Such a model can handle time series of several epochs. It analyses only size and form, not position and orientation of the reference system, and datum points may be under the influence of deformation. The geodetic and physical models are combined in one adjustment model. Full use is made of available stochastic information. Statistical testing and computation of minimal detectable deformations are incorporated. Solution methods can handle rank-deficient matrices (both the model matrix and the cofactor matrix). And, finally, a search for the best hypothesis/model is implemented. Because a geodetic deformation analysis model with all seven elements does not exist, this study develops such a model. For effective standardisation, geodetic deformation analysis models need: practical key performance indicators; a clear procedure for using the model; and the possibility to graphically visualise the estimated deformations."
DOCUMENT
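One of the seven elements listed above is that the solution method must tolerate rank-deficient model and cofactor matrices. A minimal sketch of such an adjustment, using the Moore-Penrose pseudoinverse (which returns the minimum-norm solution); this is illustrative only and leaves out the testing and minimal detectable deformations the thesis develops:

```python
import numpy as np

def adjust(A, y, Qy=None):
    """Estimate x in y = A @ x + e for a possibly rank-deficient A (and Qy)."""
    if Qy is None:
        Qy = np.eye(len(y))
    W = np.linalg.pinv(Qy)                 # weight matrix; works even if Qy is singular
    N = A.T @ W @ A                        # normal matrix, may be rank deficient
    x_hat = np.linalg.pinv(N) @ A.T @ W @ y
    e_hat = y - A @ x_hat                  # estimated residuals
    return x_hat, e_hat
```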
From the article: Adjustment and testing of a combination of stochastic and nonstochastic observations is applied to the deformation analysis of a time series of 3D coordinates. Nonstochastic observations are constant values that are treated as if they were observations. They are used to formulate constraints on the unknown parameters of the adjustment problem, and thus describe deformation patterns. If deformation is absent, the epochs of the time series are supposed to be related via affine, similarity or congruence transformations. S-basis invariant testing of deformation patterns is treated. The model is experimentally validated by showing the procedure for a point set of 3D coordinates, determined from total station measurements during five epochs. The modelling of two patterns is shown: the movement of just one point in several epochs, and the movement of several points. Full, rank-deficient covariance matrices of the 3D coordinates, resulting from free network adjustments of the total station measurements of each epoch, are used in the analysis.
MULTIFILE
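A hedged sketch of the kind of constrained adjustment the abstract describes: the nonstochastic observations act as constraints C x = c on the parameters, solved here with the bordered normal equations. The design matrix A, weight matrix W and constraint matrix C are placeholders, not the paper's actual model:

```python
import numpy as np

def constrained_adjustment(A, y, W, C, c):
    """Minimise the weighted residuals of y = A @ x + e subject to C @ x = c."""
    N = A.T @ W @ A
    k = C.shape[0]
    K = np.block([[N, C.T],
                  [C, np.zeros((k, k))]])   # bordered normal equations
    rhs = np.concatenate([A.T @ W @ y, c])
    sol = np.linalg.solve(K, rhs)           # use a pseudoinverse if K is singular
    x_hat = sol[:N.shape[1]]                # parameters; the remainder are Lagrange multipliers
    return x_hat
```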
See Springer link - available under Open Access
LINK
The principal aim of this study is to explore the relations between work domains and the work-related learning of workers. The article is intended to provide insight into the learning experiences of Dutch police officers during the course of their daily work. Interviews regarding actual learning events and subsequent changes in knowledge, skills or attitudes were conducted with police officers from different parts of the country and in different stages of their careers. Interpretative analyses grounded in the notion of intentionality and developmental relatedness revealed how, and in what kinds of work domains, police officers appear to learn. HOMALS analysis showed work-related learning activities to vary across different kinds of work domains. The implications for training and development concern the role of colleagues in different hierarchical positions in learning, as well as the utility of the conceptualisation of work-related learning presented here.
DOCUMENT
INTRODUCTION: Innovations in head and neck cancer (HNC) treatment are often subject to economic evaluation prior to their reimbursement and subsequent access for patients. Mapping functions facilitate economic evaluation of new treatments when the required utility data are absent but quality of life data are available. The objective of this study is to develop a mapping function translating the EORTC QLQ-C30 to EQ-5D-derived utilities for HNC through regression modeling, and to explore the added value of disease-specific EORTC QLQ-H&N35 scales to the model. METHODS: Data were obtained from two hospitals on patients with primary HNC treated with curative intent. Model development was conducted in two phases: 1. predictor selection based on theory- and data-driven methods, resulting in three sets of potential predictors from the quality of life questionnaires; 2. selection of the best of four methods: ordinary least squares, mixed-effects linear, Cox and beta regression, using the first set of predictors from the EORTC QLQ-C30 scales with most correspondence to the EQ-5D dimensions. Using a stepwise approach, we assessed the added value of the predictors in the other two sets. Model fit was assessed using the Akaike and Bayesian Information Criteria (AIC and BIC) and model performance was evaluated by MAE, RMSE and limits of agreement (LOA). RESULTS: The beta regression model showed the best model fit, with the global health status, physical-, role- and emotional functioning and pain scales as predictors. Adding HNC-specific scales did not improve the model. Model performance was reasonable; R2 = 0.39, MAE = 0.0949, RMSE = 0.1209, 95% LOA of -0.243 to 0.231 (bias -0.01), with an error correlation of 0.32. The estimated shrinkage factor was 0.90. CONCLUSIONS: Selected scales from the EORTC QLQ-C30 can be used to estimate utilities for HNC using beta regression. Including EORTC QLQ-H&N35 scales does not improve the mapping function. The mapping model may serve as a tool to enable cost-effectiveness analyses of innovative HNC treatments, for example for reimbursement issues. Further research should assess the robustness and generalizability of the function by validating the model in an external cohort of HNC patients.
DOCUMENT
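A hypothetical sketch of the mapping step described above, assuming statsmodels' BetaModel (statsmodels >= 0.13) and illustrative column names; the rescaling of the utilities into the open interval (0, 1), needed for the beta likelihood, is an assumption and not taken from the paper:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.othermod.betareg import BetaModel  # assumed available (statsmodels >= 0.13)

def fit_mapping(df: pd.DataFrame):
    # Illustrative EORTC QLQ-C30 scale names corresponding to the selected predictors.
    predictors = ["global_health", "physical_f", "role_f", "emotional_f", "pain"]
    X = sm.add_constant(df[predictors])
    # Rescale utilities to [0, 1], then squeeze into (0, 1) so the beta likelihood is defined.
    u = df["eq5d_utility"]
    y01 = (u - u.min()) / (u.max() - u.min())
    n = len(df)
    y = (y01 * (n - 1) + 0.5) / n
    results = BetaModel(y, X).fit()
    return results, results.predict(X)   # fitted model and mapped (predicted) utilities
```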
Background: The aim of this study is to validate a newly developed nurses' self-efficacy sources inventory. We test the validity of a five-dimensional model of sources of self-efficacy, which we contrast with the traditional four-dimensional model based on Bandura's theoretical concepts. Methods: Confirmatory factor analysis was used in the development of the new self-efficacy measure. Model fit was evaluated based upon commonly recommended goodness-of-fit indices, including the model χ2, the Root Mean Square Error of Approximation (RMSEA), the Tucker-Lewis Index (TLI), the Standardized Root Mean Square Residual (SRMR), and the Bayesian Information Criterion (BIC). Results: All 22 items of the newly developed five-factor sources of self-efficacy inventory have high factor loadings (range .40-.80). Structural equation modeling showed that the five-factor model is favoured over the four-factor model. Conclusions and implications: The results of this study show that differentiating the vicarious experience source into a peer-based and an expert-based source better reflects how nursing students develop self-efficacy beliefs. This has implications for clinical learning environments: a better and more differentiated use of self-efficacy sources can stimulate the professional development of nursing students.
DOCUMENT
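The CFA itself would be fitted in SEM software; the sketch below only shows how two of the reported fit indices, RMSEA and TLI, follow from the model and baseline χ2 statistics (standard formulas, with N the sample size and df the model degrees of freedom; the example numbers are made up):

```python
import math

def rmsea(chi2, df, n):
    """Root Mean Square Error of Approximation from the model chi-square."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def tli(chi2, df, chi2_null, df_null):
    """Tucker-Lewis Index from the model and baseline (null) chi-squares."""
    return (chi2_null / df_null - chi2 / df) / (chi2_null / df_null - 1.0)

# Illustrative comparison of a four- and a five-factor model on RMSEA:
# rmsea(chi2=310.2, df=203, n=250), rmsea(chi2=255.8, df=199, n=250)
```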
In our highly digitalized society, cybercrime has become a common crime. However, because research into cybercriminals is in its infancy, our knowledge about them is still limited. One of the main open questions is whether cybercriminals have higher intellectual capabilities than traditional criminals or even the general population. Although criminological studies clearly show that traditional criminals have lower intellectual capabilities than the general population, little is known about the relationship between cybercrime and intelligence. The current study adds to the literature by exploring the relationship between CITO test scores and cybercrime in the Netherlands. The CITO final test is a standardized test for primary school students, usually taken at the age of 11 or 12, and is highly correlated with IQ scores. Data from Statistics Netherlands were used to compare the CITO test scores of 143 apprehended cybercriminals with those of 143 apprehended traditional criminals and 143 non-criminals, matched on age, sex and country of birth. Ordinary least squares regression analyses were used to compare CITO test scores between cybercriminals, traditional criminals and non-criminals. Additionally, a discordant sibling design was used to control for unmeasured confounding by family factors. The findings reveal that cybercriminals have significantly higher CITO test scores than traditional criminals and significantly lower CITO test scores than non-criminals.
DOCUMENT
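A hedged sketch of the ordinary least squares comparison described above, with hypothetical variable names (the actual analysis used matched registry data from Statistics Netherlands, which are not reproduced here): regress CITO scores on dummy-coded group membership with the non-criminal group as the reference category.

```python
import pandas as pd
import statsmodels.formula.api as smf

def compare_groups(df: pd.DataFrame):
    # 'group' takes the values 'cybercriminal', 'traditional' and 'non_criminal';
    # treatment (dummy) coding uses the non-criminal group as the reference.
    model = smf.ols(
        "cito_score ~ C(group, Treatment(reference='non_criminal'))",
        data=df,
    ).fit()
    return model.summary()
```

The group coefficients then estimate the mean difference in CITO scores of cybercriminals and traditional criminals relative to non-criminals; the discordant sibling design would require a separate within-family model.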