In this paper we analyse the way students tag recorded lectures. We compare their tagging strategy, and the tags they create, with the tagging done by an expert. We look at the quality of the tags students add, and we introduce a method for measuring how similar the tags are, using vector space modelling and cosine similarity. We show that the quality of tagging by students is high enough to be useful. We also show that there is no generic vocabulary gap between the expert and the students. Our study shows no statistically significant correlation between the tag similarity and the indicated interest in the course, the perceived importance of the course, the number of lectures attended, the indicated difficulty of the course, the number of recorded lectures viewed, the indicated ease of finding the needed parts of a recorded lecture, or the number of tags used by the student.
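To make the similarity measure concrete, the sketch below is a minimal, hypothetical illustration of the vector space approach (the function and the example tags are ours, not taken from the paper): each tag set becomes a term-frequency vector, and two sets are compared by the cosine of the angle between their vectors.

```python
import math
from collections import Counter

def cosine_similarity(tags_a, tags_b):
    """Cosine similarity between two tag sets in a term-frequency vector space."""
    va, vb = Counter(tags_a), Counter(tags_b)
    dot = sum(va[t] * vb[t] for t in va.keys() & vb.keys())
    norm_a = math.sqrt(sum(x * x for x in va.values()))
    norm_b = math.sqrt(sum(x * x for x in vb.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# e.g. a student's tags for one lecture versus the expert's tags
print(cosine_similarity(["recursion", "stack", "base case"],
                        ["recursion", "base case", "induction"]))  # ~0.67
```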
Presentation at the Nederlandse Vereniging voor Criminologie Congres 2022
Several models in data analysis are estimated by minimizing an objective function defined as the residual sum of squares between the model and the data. A necessary and sufficient condition for the existence of a least squares estimator is that the objective function attains its infimum at a unique point. It is shown that the objective function for Parafac-2 need not attain its infimum, whereas the objective functions of DEDICOM, constrained Parafac-2 and, under a weak assumption, SCA and Dynamals do attain their infimum. Furthermore, the sequence of parameter vectors generated by an alternating least squares algorithm converges if it decreases the objective function to an infimum that is attained at one point or at finitely many points.
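The convergence statement turns on the fact that each alternating least squares update solves its subproblem exactly, so the residual sum of squares can only decrease. A minimal sketch of this principle on a toy rank-1 model X ≈ ab' is given below; it is our own illustration (model and names chosen for brevity), not the Parafac-2 or DEDICOM algorithm itself.

```python
import numpy as np

def als_rank1(X, n_iter=50):
    """Alternating least squares for the rank-1 model X ≈ a b'."""
    rng = np.random.default_rng(0)
    b = rng.standard_normal(X.shape[1])
    for _ in range(n_iter):
        a = X @ b / (b @ b)    # exact LS update of a given b
        b = X.T @ a / (a @ a)  # exact LS update of b given a
    # residual sum of squares after the alternating updates
    return a, b, np.linalg.norm(X - np.outer(a, b)) ** 2
```

Because every update is an exact least squares solution, the objective values form a monotonically decreasing, bounded sequence; whether the parameter vectors themselves converge is precisely the question the paper addresses.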
Estimation of the factor model by unweighted least squares (ULS) is distribution free, yields consistent estimates, and is computationally fast if the Minimum Residuals (MinRes) algorithm is employed. MinRes algorithms produce a converging sequence of monotonically decreasing ULS function values. Various suggestions for algorithms of the MinRes type are made for confirmatory as well as for exploratory factor analysis. These suggestions include the implementation of inequality constraints and the prevention of Heywood cases. A simulation study, comparing the bootstrap standard deviations for the parameters with the standard errors from maximum likelihood, indicates that these are virtually equal when the score vectors are sampled from the normal distribution. Two empirical examples demonstrate the usefulness of constrained exploratory and confirmatory factor analysis by ULS used in conjunction with the bootstrap method.
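As a rough illustration of the MinRes idea of fitting the correlation matrix off its diagonal, the sketch below implements classical iterative principal-axis factoring with communalities capped at one (one simple way to prevent Heywood cases). It is a schematic of the MinRes type rather than the authors' algorithm, and the names are ours.

```python
import numpy as np

def uls_factors(R, k, n_iter=500, tol=1e-8):
    """Principal-axis iteration fitting R ≈ L L' on the off-diagonal elements."""
    # squared multiple correlations as starting communalities
    h = 1.0 - 1.0 / np.diag(np.linalg.inv(R))
    for _ in range(n_iter):
        Rh = R.copy()
        np.fill_diagonal(Rh, h)                  # replace diagonal by communalities
        vals, vecs = np.linalg.eigh(Rh)
        top = np.argsort(vals)[::-1][:k]         # k largest eigenvalues
        L = vecs[:, top] * np.sqrt(np.clip(vals[top], 0.0, None))
        h_new = np.clip((L ** 2).sum(axis=1), 0.0, 1.0)  # cap: no Heywood cases
        if np.max(np.abs(h_new - h)) < tol:
            return L, h_new
        h = h_new
    return L, h
```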
We Are Not Sick is a hybrid lecture/music performance by Geert Lovink and John Longwalker. Combining a diversity of text, image and music genres, the project reflects on the encroaching sadness provoked by social media architectures. Through this project they push for new modalities in both music and critical theory, to shake up both the dance floor and the lecture circuit. Utilizing a range of electronic musical genres for maximum reach, the Sad By Design album is not a soundtrack to a book of theory but rather a new attempt at expressing those same themes, using the same words but achieving different vectors of critique. Sad By Design is a ‘carrier wave for critical theory,’ crafted over two years of refining their answer to the question of what this new ‘critical music’ hybrid feels like to experience.
From the article: Adjustment and testing of a combination of stochastic and nonstochastic observations is applied to the deformation analysis of a time series of 3D coordinates. Nonstochastic observations are constant values that are treated as if they were observations. They are used to formulate constraints on the unknown parameters of the adjustment problem; thus they describe deformation patterns. If deformation is absent, the epochs of the time series are supposed to be related via affine, similarity or congruence transformations. S-basis invariant testing of deformation patterns is treated. The model is experimentally validated by showing the procedure for a point set of 3D coordinates, determined from total station measurements during five epochs. The modelling of two patterns is shown: the movement of just one point in several epochs, and the movement of several points. Full, rank deficient covariance matrices of the 3D coordinates, resulting from free network adjustments of the total station measurements of each epoch, are used in the analysis.
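Generically, nonstochastic observations act as hard constraints on the parameters of the adjustment. The sketch below shows this mechanism for a weighted least squares problem y ≈ Ax subject to Cx = c, solved through the bordered normal equations; it is a minimal generic example (our notation), not the paper's S-basis testing procedure.

```python
import numpy as np

def adjust_with_constraints(A, y, W, C, c):
    """Weighted LS adjustment of y ≈ A x under hard constraints C x = c."""
    m = A.shape[1]
    k = C.shape[0]
    N = A.T @ W @ A                        # normal matrix
    K = np.block([[N, C.T],
                  [C, np.zeros((k, k))]])  # bordered (KKT) system
    rhs = np.concatenate([A.T @ W @ y, c])
    sol = np.linalg.solve(K, rhs)
    return sol[:m]                         # estimated parameters (rest: multipliers)
```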
Author supplied: "This paper gives a linearised adjustment model for the affine, similarity and congruence transformations in 3D that is easily extendable with other parameters to describe deformations. The model considers all coordinates stochastic. Full positive semi-definite covariance matrices and correlation between epochs can be handled. The determination of transformation parameters between two or more coordinate sets, determined by geodetic monitoring measurements, can be handled as a least squares adjustment problem. It can be solved without linearisation of the functional model, if it concerns an affine, similarity or congruence transformation in one-, two- or three-dimensional space. If the functional model describes more than such a transformation, it is hardly ever possible to find a direct solution for the transformation parameters. Linearisation of the functional model and applying least squares formulas is then an appropriate mode of working. The adjustment model is given as a model of observation equations with constraints on the parameters. The starting point is the affine transformation, whose parameters are constrained to get the parameters of the similarity or congruence transformation. In this way the use of Euler angles is avoided. Because the model is linearised, iteration is necessary to get the final solution. In each iteration step approximate coordinates are necessary that fulfil the constraints. For the affine transformation it is easy to get approximate coordinates. For the similarity and congruence transformation the approximate coordinates have to comply to constraints. To achieve this, use is made of the singular value decomposition of the rotation matrix. To show the effectiveness of the proposed adjustment model total station measurements in two epochs of monitored buildings are analysed. Coordinate sets with full, rank deficient covariance matrices are determined from the measurements and adjusted with the proposed model. Testing the adjustment for deformations results in detection of the simulated deformations."
See Springer link - available under Open Access
Routine cloning is often associated with instability of certain classes of DNA. Here we report on IS1 transposition as a possible source of such instability. During the cloning of an Arabidopsis thaliana gene into a commercially available vector maintained in a widely used Escherichia coli host, the insertion of a complete IS1 element into the intron of the cloned gene was found. The transposition of the IS1 element was remarkably rapid and is likely to be sequence-specific. Using E. coli strains that lower the copy number of the vector, or avoiding the presence of the problematic sequence, is a solution to the inadvertent transposition of IS1. The transposition of IS1 is rare, but it can occur and might confound functional studies of a plant gene.
Preprint submitted to Information Processing & Management.
Tags are a convenient way to label resources on the web. An interesting question is whether one can determine the semantic meaning of tags in the absence of a predefined formal structure such as a thesaurus. Many authors have used tag usage data to find their emergent semantics. Here, we argue that the semantics of tags can be captured by comparing the contexts in which tags appear. We give an approach to operationalizing this idea by defining what we call paradigmatic similarity: computing co-occurrence distributions of tags with tags in the same context, and comparing tags using information-theoretic similarity measures of these distributions, mostly the Jensen-Shannon divergence. In experiments with three different tagged data collections we study its behavior and compare it to other distance measures. For some tasks, like terminology mapping or clustering, paradigmatic similarity seems to give better results than similarity measures based on the co-occurrence of the documents or other resources that the tags are associated with. We argue that paradigmatic similarity is superior to other distance measures if agreement on topics (as opposed to style, register, language, etc.) is the most important criterion, and the main differences between the tagged elements in the data set correspond to different topics.
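The core computation, comparing two tags through the co-occurrence distributions of their contexts using the Jensen-Shannon divergence, can be sketched as follows. The function and variable names are ours and the snippet is schematic, not the authors' implementation.

```python
import numpy as np
from collections import Counter

def cooccurrence_dist(tag, taggings, vocab):
    """Distribution over `vocab` of the tags co-occurring with `tag`."""
    counts = Counter()
    for tags in taggings:
        if tag in tags:
            counts.update(t for t in tags if t != tag)
    v = np.array([counts[t] for t in vocab], dtype=float)
    return v / v.sum() if v.sum() else v

def jsd(p, q):
    """Jensen-Shannon divergence (base 2), between 0 and 1."""
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0
        return float(np.sum(a[mask] * np.log2(a[mask] / b[mask])))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

taggings = [{"python", "code", "tutorial"}, {"python", "programming", "code"},
            {"music", "jazz", "concert"}, {"jazz", "music", "code"}]
vocab = sorted(set().union(*taggings))
print(jsd(cooccurrence_dist("python", taggings, vocab),
          cooccurrence_dist("jazz", taggings, vocab)))  # high: different topics
```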