Physical activity is crucial in human life, whether in everyday activities or elite sports. Maintaining or improving physical performance depends on various factors, such as the amount of physical activity and the capability and capacity of the individual. In daily life, being physically active is important for maintaining good health; intense exercise is not necessary, as simple daily activities contribute enough. In sports, it is essential to balance capacity, workload, and recovery to prevent performance decline or injury.

With the introduction of wearable technology, it has become easier to monitor and analyse physical activity and performance data in daily life and sports. However, extracting personalised insights and predictions from the vast and complex data available remains a challenge.

The study identified four main problems in data analytics related to physical activity and performance: limited personalised prediction due to data constraints and the complexity of the available data, the need for sensitive performance measures, overly simplified models, and missing influential variables. We proposed and investigated potential solutions for each issue: leveraging personalised data from wearables, combining sensitive performance measures with various machine learning algorithms, incorporating causal modelling, and addressing the absence of influential variables in the data.

Personalised data, machine learning, sensitive performance measures, advanced statistics, and causal modelling can help bridge the data analytics gap in understanding physical activity and performance. The research findings pave the way for more informed interventions and provide a foundation for future studies to further reduce this gap.
LINK
''The ever-increasing population in seismically active urban areas, an aging building stock, and the expansion of urbanization onto previously agricultural lands with soft soil deposits make the protection of human lives against earthquake disasters increasingly difficult over time. Although much effort is put into further improving current seismic design practices for new buildings, recent earthquakes show us, again and again, that life losses occur in older and much more vulnerable structures. Finding those substandard, collapse-vulnerable buildings before a destructive earthquake is like finding a needle in a haystack. It is clear that the problem at hand can no longer be addressed with the existing, mostly old-fashioned tools.

This manuscript focuses on how emerging technologies, such as Artificial Intelligence, image processing, and data science in general, can be implemented as useful tools for conducting urban-scale seismic risk assessment while estimating the risk for every individual building. A review of the available technologies is given for the exposure component. Furthermore, a novel method for estimating the vulnerability of individual buildings, based on autoregressive machine learning algorithms, is presented. The manuscript argues that these technological advancements are mature enough to radically alter how earthquake risk is estimated.''
DOCUMENT
In recent decades, the number of cases of knee arthroplasty among people of working age has increased. The integrated clinical pathway ‘back at work after surgery’ is an initiative to reduce the possible cost of sick leave. The evaluation of this pathway, like many clinical studies, faces the challenge of small data sets with a relatively high number of features. In this study, we investigate the possibility of identifying features that are important in determining the duration of rehabilitation, expressed as the return-to-work period, by using feature selection tools. Several models are used to classify the patients’ data into two classes, and the results are evaluated based on the accuracy and the quality of the ordering of the features, for which we introduce a ranking score. A selection of estimators is used in an optimization step, reorganizing the feature ranking. The results show that for some models, the proposed optimization results in a better ordering of the features. The ordering of the features is evaluated visually and identified by the ranking score. Furthermore, for all models, higher accuracy, with a maximum of 91%, is achieved by applying the optimization process. The features that are identified as relevant for the duration of the return-to-work period are discussed and provide input for further research.
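The idea of ordering features by how informative they are for a two-class outcome can be sketched in a few lines. The snippet below is an illustrative sketch only, not the ranking score or estimators used in the study: it ranks features of a small hypothetical dataset by a simple univariate score, the absolute difference in class means.

```python
# Illustrative sketch (not the study's method): rank features by the
# absolute difference between the class means of each feature.

def rank_features(rows, labels):
    """Return feature indices ordered from most to least informative."""
    n_features = len(rows[0])
    scores = []
    for j in range(n_features):
        c0 = [r[j] for r, y in zip(rows, labels) if y == 0]
        c1 = [r[j] for r, y in zip(rows, labels) if y == 1]
        scores.append(abs(sum(c1) / len(c1) - sum(c0) / len(c0)))
    return sorted(range(n_features), key=lambda j: scores[j], reverse=True)

# Hypothetical data: feature 1 separates the classes, feature 0 does not.
rows = [[1.0, 0.1], [1.1, 0.2], [0.9, 0.9], [1.0, 1.0]]
labels = [0, 0, 1, 1]
print(rank_features(rows, labels))  # → [1, 0]
```

In practice such a univariate score would be one of several candidate estimators whose rankings are then compared and reordered in an optimization step.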
DOCUMENT
The paper explores the effectiveness of automated clustering in personalized applications based on data characteristics. It evaluates three clustering algorithms with various cluster numbers and subsets of characteristics. The study compares the accuracy of models in different clusters against original results and examines the algorithmic approaches and characteristic selections for optimal clustering performance. The research concludes that the proposed method aids in selecting appropriate clustering strategies and relevant characteristics for datasets. These insights may also guide further research on coaching approaches within applications.
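Comparing clustering configurations with different cluster numbers typically relies on a quantitative quality measure. The sketch below is a minimal assumed setup, not the paper's pipeline: a tiny 1-D k-means whose clusterings are compared by total within-cluster squared error.

```python
# Illustrative sketch (assumed setup): a minimal 1-D k-means, used to
# compare clusterings with different cluster numbers by their total
# within-cluster squared error.

def kmeans_1d(xs, k, iters=20):
    # Spread initial centers over the sorted data.
    centers = sorted(xs)[::max(1, len(xs) // k)][:k]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in xs:
            groups[min(range(k), key=lambda i: (x - centers[i]) ** 2)].append(x)
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    error = sum(min((x - c) ** 2 for c in centers) for x in xs)
    return centers, error

xs = [0.1, 0.2, 0.15, 5.0, 5.1, 4.9]  # two obvious groups
_, err2 = kmeans_1d(xs, 2)
_, err1 = kmeans_1d(xs, 1)
print(err2 < err1)  # prints True
```

Real evaluations would also penalize model complexity (e.g. via silhouette scores) rather than raw error, which always decreases as the cluster number grows.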
DOCUMENT
In the rapidly evolving field of Machine Learning, selecting the most appropriate model for a given dataset is crucial. Understanding the characteristics of a dataset can significantly influence the outcomes of predictive modeling efforts, making the study of the properties of the dataset an essential component of data science. This study investigates the possibilities of using simulated human data for personalized applications, specifically for testing clustering approaches. In particular, the study focuses on the relationship between dataset characteristics and the selection of the optimal classification model for clusters of datasets. The results of this study provide critical insights for researchers and practitioners in machine learning, emphasizing the importance of dataset characteristics and variability in building and selecting robust models for diverse data conditions. The use of human simulation data provides valuable insights but requires further refinement to capture the full variability of real-world conditions.
DOCUMENT
Context: Rapid developments and adoption of machine learning-based software solutions have enabled novel ways to tackle our societal problems. The ongoing digital transformation has led to the incorporation of these software solutions in just about every application domain. Software architecture for machine learning applications used during sustainable digital transformation can potentially aid the evolution of the underlying software system, adding to its sustainability over time.

Objective: Software architecture for machine learning applications in general is an open research area. When applying it to sustainable digital transformation, it is not clear which of its considerations actually apply in this context. We therefore aim to understand how the topics of sustainable digital transformation, software architecture, and machine learning interact with each other.

Methods: We perform a systematic mapping study to explore the scientific literature on the intersection of sustainable digital transformation, machine learning, and software architecture.

Results: We have found that the intersection of interest is small despite the amount of work on its individual aspects, and that not all dimensions of sustainability are represented equally. We also found that the application domains are diverse and include many important sectors and industry groups. At the same time, the perceived level of maturity of machine learning adoption in existing works seems to be quite low.

Conclusion: Our findings show an opportunity for further software architecture research to aid sustainable digital transformation, especially by building on the emerging practice of machine learning operations.
DOCUMENT
Human Digital Twins are an emerging type of Digital Twin used in healthcare to provide personalized support. Following this trend, we intend to elevate our virtual fitness coach, a coaching platform using wearable data on physical activity, to the level of a personalized Human Digital Twin. Preliminary investigations revealed a significant difference in performance, as measured by prediction accuracy and F1-score, between the optimal choice of machine learning algorithms for generalized and personalized processing of the available data. Based on these findings, this survey aims to establish the state of the art in the selection and application of machine learning algorithms in Human Digital Twin applications in healthcare. The survey reveals that, unlike general machine learning applications, there is a limited body of literature on optimization and the application of meta-learning in personalized Human Digital Twin solutions. In conclusion, we provide a direction for further research, formulated in the following research question: how can the optimization of human data feature engineering and personalized model selection be achieved in Human Digital Twins, and can techniques such as meta-learning be of use in this context?
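The contrast between generalized and personalized model selection can be made concrete with a toy example. The sketch below is purely illustrative and assumed (the user data, candidate "models", and activity labels are invented): it picks, for each user, whichever candidate model scores best on that user's own data, instead of fitting one model to all users.

```python
# Illustrative sketch (assumed data and models): per-user model selection,
# choosing the candidate that best matches each user's own activity labels.

candidates = {
    "always_active": lambda day: 1,                       # predicts activity daily
    "weekend_only": lambda day: 1 if day in (5, 6) else 0,  # weekends only
}

def accuracy(model, data):
    """Fraction of (day, label) pairs the model predicts correctly."""
    return sum(model(day) == label for day, label in data) / len(data)

# Hypothetical users: days 0-6 (Mon-Sun) paired with activity labels.
users = {
    "u1": [(0, 1), (1, 1), (5, 1)],          # active every day
    "u2": [(0, 0), (1, 0), (5, 1), (6, 1)],  # active on weekends
}

personalized = {
    user: max(candidates, key=lambda name: accuracy(candidates[name], data))
    for user, data in users.items()
}
print(personalized)  # → {'u1': 'always_active', 'u2': 'weekend_only'}
```

A single generalized model would have to compromise across both users; meta-learning approaches automate this kind of per-dataset selection at scale.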
DOCUMENT
ICT is much more than an aid to education and training: it continually provokes a fresh view of the essence of learning, and thereby of teaching as well. Strikingly, ICT penetrates education even before any model or theory of its contribution has been formed; this is pragmatic and opportunistic. Moreover, looking back at reforms of educational thinking, they were often driven by the technological innovations of the moment: the arrival of the printing press, telecommunications, computer systems, and virtual reality. Soon we will see far-reaching influences from biotechnology, genetic modification, nanotechnology, and so on. The current step from the laptop to the versatile mobile phone is only one in the long line of ICT tools still to come. If we extrapolate the trend of ICT in education, 'mobile learning' can be expected to lead above all to 'ubiquitous learning': learning everywhere and continually. The notion of 'learning by heart' takes on new meaning: not just memorization, but building a relationship with the subject you are studying. The person of the teacher becomes even more important than they already are. Mobile communication will bear its first fruits in the 'continual learning' of the teacher, with the mobile phone and the online PDA playing a crucial role. The Fontys teacher-training programmes enthusiastically take on this pioneering role, with the research chair Educatieve Functies van ICT supervising teachers and doctoral candidates in this work.
DOCUMENT
This publication provides targeted theoretical and practical information for users of the various machines and tools employed in sheet-forming processes (deep drawing, collar drawing, stretching, as well as bending and cutting), for those interested in these processes, and for technical courses and training programmes. It covers the most important machines and tools, along with additional information relevant to the forming of thin sheet metal. The information publications VM 110 "Dieptrekken" (deep drawing), VM 113 "Buigen" (bending), and VM 114 "Scheiden" (cutting) contain data on the various forming processes, and VM 111 "Materialen" covers the materials used. This publication is an update of the first edition published in 2000, which was compiled at the time by the working group "Dieptrekken van dunne plaat, staal, aluminium". As part of an update project, the NIMR, now called M2i (Materials innovation institute), made funds available to renew these publications and bring them in line with the current state of the art.
DOCUMENT
Metadata creation for moving images: a gap in the market for the smart cataloguer? Why not? In the future we will be even more visually oriented than we are now, and the computer cannot 'see' images. Or can it?
DOCUMENT