Optimizing physical performance is a major goal in current physiology. However, a basic understanding of how to combine high sprint and endurance performance is currently lacking. This study identifies critical determinants of combined sprint and endurance performance using multiple regression analyses of physiologic determinants at different biologic levels. Cyclists, including 6 international sprint, 8 team pursuit, and 14 road cyclists, completed a Wingate test and a 15-km time trial to obtain sprint and endurance performance results, respectively. Performance was normalized to lean body mass^(2/3) to eliminate the influence of body size. Performance determinants were obtained from whole-body oxygen consumption, blood sampling, knee-extensor maximal force, muscle oxygenation, whole-muscle morphology, and muscle fiber histochemistry of musculus vastus lateralis. Normalized sprint performance was explained by the percentage of fast-type fibers and muscle volume (R^2 = 0.65; P < 0.001), and normalized endurance performance by performance oxygen consumption (V̇o2), mean corpuscular hemoglobin concentration, and muscle oxygenation (R^2 = 0.92; P < 0.001). Combined sprint and endurance performance was explained by gross efficiency and performance V̇o2, and likely by muscle volume and fascicle length (P = 0.056; P = 0.059). High performance V̇o2 was related to a high oxidative capacity, high capillarization × myoglobin, and a small physiologic cross-sectional area (R^2 = 0.67; P < 0.001). Results suggest that fascicle length and capillarization are important targets for training to optimize sprint and endurance performance simultaneously.-Van der Zwaard, S., van der Laarse, W. J., Weide, G., Bloemers, F. W., Hofmijster, M. J., Levels, K., Noordhof, D. A., de Koning, J. J., de Ruiter, C. J., Jaspers, R. T. Critical determinants of combined sprint and endurance performance: an integrative analysis from muscle fiber to the human body.
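The normalization and regression steps described above can be illustrated with a small sketch. The snippet below is not the study's actual pipeline; the data, column names, and predictor choice are hypothetical and only show how performance might be scaled to lean body mass^(2/3) and regressed on candidate determinants.

```python
# Hypothetical data and column names; illustrates scaling to lean body mass^(2/3)
# and a multiple linear regression, not the study's actual dataset or model.
import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({
    "peak_power_W":      [1450, 1600, 1200, 1380, 1520, 1290],  # Wingate peak power
    "lean_body_mass_kg": [62.0, 68.5, 58.0, 64.2, 66.1, 60.3],
    "pct_fast_fibers":   [55.0, 63.0, 40.0, 52.0, 60.0, 45.0],  # % fast-type fibers
    "muscle_volume_cm3": [2100, 2350, 1800, 2050, 2250, 1900],
})

# Normalize sprint performance to lean body mass^(2/3) to remove body-size effects.
df["sprint_norm"] = df["peak_power_W"] / df["lean_body_mass_kg"] ** (2 / 3)

# Multiple regression of normalized sprint performance on candidate determinants.
X = sm.add_constant(df[["pct_fast_fibers", "muscle_volume_cm3"]])
fit = sm.OLS(df["sprint_norm"], X).fit()
print(fit.summary())  # reports R^2 and a P-value per determinant
```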
Introduction: A trauma resuscitation is a dynamic and complex process in which failures can lead to serious adverse events. In several trauma centers, evaluation of trauma resuscitation is part of the hospital's quality assessment program. While video analysis is commonly used, some hospitals rely on live observations, mainly because of ethical and medicolegal concerns. The aim of this study was to compare the validity and reliability of video analysis and live observation for evaluating trauma resuscitations. Methods: In this prospective observational study, validity was assessed by comparing the adherence to 28 advanced trauma life support (ATLS) guideline-related tasks observed by video analysis with that observed live. Interobserver reliability was assessed by calculating the intraclass correlation coefficient (ICC) of observed ATLS-related tasks for live observation and video analysis. Results: Eleven simulated and thirteen real-life resuscitations were assessed. Overall, the percentage of observed ATLS-related tasks performed during simulated resuscitations was 10.4% (P < 0.001) higher when the same resuscitations were analysed by video rather than by live observation. During real-life resuscitations, 8.7% (P < 0.001) more ATLS-related tasks were observed with video review than with live observation. In absolute terms, a mean of 2.9 ATLS-related tasks per simulated resuscitation and 2.5 per real-life resuscitation were missed by live observers but identified through video analysis. Interobserver reliability for observed ATLS-related tasks was significantly higher with video analysis than with live observation for both simulated (video analysis: ICC 0.97; 95% CI 0.97-0.98 vs. live observation: ICC 0.69; 95% CI 0.57-0.78) and real-life resuscitations (video analysis: ICC 0.99; 95% CI 0.99-1.00 vs. live observation: ICC 0.86; 95% CI 0.83-0.89). Conclusion: Video analysis of trauma resuscitations may be more valid and reliable than evaluation by live observers. These outcomes may inform the debate on justifying video review instead of live observation.
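As a rough illustration of the reliability analysis, the sketch below computes a two-way random-effects, absolute-agreement ICC, here ICC(2,1), from hypothetical task counts (rows are resuscitations, columns are observers); the study's exact ICC model and data are not reproduced here.

```python
# Hypothetical ratings; illustrates an ICC(2,1) computation, not the study's data.
import numpy as np

def icc_2_1(scores: np.ndarray) -> float:
    """scores: (n_targets, k_raters) matrix of ratings."""
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)
    col_means = scores.mean(axis=0)
    msr = k * ((row_means - grand) ** 2).sum() / (n - 1)   # between-target mean square
    msc = n * ((col_means - grand) ** 2).sum() / (k - 1)   # between-rater mean square
    sse = ((scores - row_means[:, None] - col_means[None, :] + grand) ** 2).sum()
    mse = sse / ((n - 1) * (k - 1))                        # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Counts of observed ATLS-related tasks per resuscitation, two observers (hypothetical).
video = np.array([[24, 25], [22, 22], [26, 26], [20, 21], [23, 23]], dtype=float)
live  = np.array([[21, 24], [19, 23], [24, 22], [17, 21], [20, 24]], dtype=float)

print(f"ICC video analysis:   {icc_2_1(video):.2f}")
print(f"ICC live observation: {icc_2_1(live):.2f}")
```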
The capacity of Amsterdam Airport Schiphol is limited to 500,000 air traffic movements per year, and this limit is currently being reached. For that reason, Schiphol Group decided to divert non-hub-related traffic to the regional airport in Lelystad. This airport will be upgraded to handle commercial traffic, mainly low-cost carriers. We used a divide-and-conquer approach, building SIMIO modules for the main elements of the system, namely the airspace, runway, taxiways, and airport stands, to analyze the future performance and potential operational problems of the airport. An analysis of the different operational areas of the system was performed, and we identified problems arising from the emergent dynamics once the different subsystems interacted with one another.
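SIMIO is a commercial simulation tool, so as a loose open-source analogue the sketch below expresses the same divide-and-conquer idea in Python with SimPy: the runway, taxiway, and stands are separate resource modules that each arriving aircraft traverses in sequence. All capacities and times are invented for illustration and are not Lelystad design values.

```python
# Illustrative SimPy analogue of the modular airport model; all parameters hypothetical.
import random
import simpy

RUNWAY_OCCUPANCY_MIN = 2.0     # minutes the runway is blocked per movement
TAXI_TIME_MIN = 4.0
TURNAROUND_MIN = 35.0
MEAN_INTERARRIVAL_MIN = 12.0

def aircraft(env, name, runway, taxiway, stands):
    with runway.request() as req:          # landing
        yield req
        yield env.timeout(RUNWAY_OCCUPANCY_MIN)
    with taxiway.request() as req:         # taxi to the apron
        yield req
        yield env.timeout(TAXI_TIME_MIN)
    with stands.request() as req:          # turnaround at a stand
        yield req
        yield env.timeout(TURNAROUND_MIN)
    with runway.request() as req:          # departure
        yield req
        yield env.timeout(RUNWAY_OCCUPANCY_MIN)
    print(f"{env.now:7.1f} min: {name} departed")

def arrivals(env, runway, taxiway, stands):
    i = 0
    while True:
        yield env.timeout(random.expovariate(1.0 / MEAN_INTERARRIVAL_MIN))
        i += 1
        env.process(aircraft(env, f"flight-{i}", runway, taxiway, stands))

env = simpy.Environment()
runway = simpy.Resource(env, capacity=1)
taxiway = simpy.Resource(env, capacity=2)
stands = simpy.Resource(env, capacity=5)
env.process(arrivals(env, runway, taxiway, stands))
env.run(until=8 * 60)                      # one 8-hour operating day
```

Interactions between the modules (e.g., aircraft holding the runway while waiting for a stand) are exactly the kind of emergent bottleneck such a decomposition helps expose.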
Due to the existing pressure for a more rational use of water, many public managers and industries have to rethink and adapt their processes towards a more circular approach. This pressure is even more critical in the Rio Doce region, Minas Gerais, because of the large environmental accident that occurred in 2015. Cenibra (a pulp mill) is an example of such an industry, as it is situated in the river basin and has a water-demanding process. The current proposal is an academic and engineering study that proposes possible solutions to decrease the total water consumption of the mill and thus reduce the total stress on the Rio Doce basin. The work will be divided into three work packages: (i) evaluation (modelling) of the mill process and water balance; (ii) application and operation of a pilot-scale wastewater treatment plant; and (iii) analysis of the impacts of the process improvements. The second work package will also be conducted in parallel with a lab-scale setup in the Netherlands to allow fast adjustments and a broader evaluation of the setup/process performance. The actions will focus on reducing the mill's total water consumption by 20%.
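Purely as an illustration of work package (i) and the 20% target, the sketch below sets up a trivial steady-state water balance with invented specific flows; a real mill balance would be far more detailed.

```python
# Hypothetical specific flows in m3 per air-dried tonne of pulp (m3/ADt);
# a toy steady-state balance, not Cenibra's actual figures.
fresh_intake = 30.0      # raw water intake
effluent = 25.0          # treated effluent discharged

losses = fresh_intake - effluent          # evaporation + water leaving with the product
target_intake = 0.8 * fresh_intake        # 20% reduction goal on total intake
extra_reuse_needed = fresh_intake - target_intake

print(f"Losses: {losses:.1f} m3/ADt")
print(f"Target intake: {target_intake:.1f} m3/ADt "
      f"(requires recycling an extra {extra_reuse_needed:.1f} m3/ADt, "
      f"e.g. by polishing part of the effluent in the pilot treatment plant)")
```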
The focus of the research is 'Automated Analysis of Human Performance Data'. The three interconnected main components are (i) Human Performance, (ii) Monitoring Human Performance, and (iii) Automated Data Analysis. Human Performance is both the process and the result of a person interacting with a context to engage in tasks, whereas the performance range is determined by the interaction between the person and the context. Cheap and reliable wearable sensors allow large amounts of data to be gathered, which is very useful for understanding, and possibly predicting, the performance of the user. Given the amount of data generated by such sensors, manual analysis becomes infeasible; tools should be devised to perform automated analysis that looks for patterns, features, and anomalies. Such tools can help transform wearable sensors into reliable high-resolution devices, help experts analyse wearable sensor data in the context of human performance, and support its use for diagnosis and intervention purposes. Shyr and Spisic describe automated data analysis as follows: "Automated data analysis provides a systematic process of inspecting, cleaning, transforming, and modelling data with the goal of discovering useful information, suggesting conclusions and supporting decision making for further analysis." Their philosophy is to do the tedious part of the work automatically and allow experts to focus on performing their research and applying their domain knowledge. However, automated data analysis means that the system has to teach itself to interpret interim results and iterate. Knuth stated: "Science is knowledge which we understand so well that we can teach it to a computer; and if we don't fully understand something, it is an art to deal with it" [Knuth, 1974]. The knowledge on Human Performance and its Monitoring is to be 'taught' to the system. To be able to construct automated analysis systems, an overview of the essential processes and components of these systems is needed. As Knuth also put it: "Since the notion of an algorithm or a computer program provides us with an extremely useful test for the depth of our knowledge about any given subject, the process of going from an art to a science means that we learn how to automate something."
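A minimal sketch of the inspect-clean-transform-model loop on synthetic wearable data is shown below; the signal, window sizes, and the anomaly detector are assumptions for illustration, not the project's actual tooling.

```python
# Synthetic heart-rate stream standing in for wearable sensor data; shows one
# possible inspect -> clean -> transform -> model pipeline, purely illustrative.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
t = pd.date_range("2024-01-01", periods=1_000, freq="s")
hr = 70 + 10 * np.sin(np.linspace(0, 20, 1_000)) + rng.normal(0, 2, 1_000)
hr[500:505] = np.nan                      # simulated sensor dropout
hr[800] = 190                             # simulated artefact/anomaly
df = pd.DataFrame({"heart_rate": hr}, index=t)

# Inspect & clean: flag missing samples and interpolate short gaps.
print("missing samples:", int(df["heart_rate"].isna().sum()))
df["heart_rate"] = df["heart_rate"].interpolate(limit=10)

# Transform: rolling features that summarise short-term behaviour.
df["hr_mean_30s"] = df["heart_rate"].rolling("30s").mean()
df["hr_std_30s"] = df["heart_rate"].rolling("30s").std()

# Model: unsupervised anomaly detection over the engineered features.
features = df[["hr_mean_30s", "hr_std_30s"]].dropna()
labels = IsolationForest(contamination=0.01, random_state=0).fit_predict(features)
print("anomalous timestamps:", features.index[labels == -1][:5].tolist())
```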
Today, embedded devices such as banking/transportation cards, car keys, and mobile phones use cryptographic techniques to protect personal information and communication. Such devices are increasingly becoming the targets of attacks that try to capture the underlying secret information, e.g., cryptographic keys. Attacks targeting not the cryptographic algorithm but its implementation are especially devastating, and the best-known examples are so-called side-channel and fault injection attacks. Such attacks, often jointly referred to as physical (implementation) attacks, are difficult to preclude, and if the key (or other secret data) is recovered, the device is useless. To mitigate such attacks, security evaluators use the same techniques as attackers and look for possible weaknesses in order to “fix” them before deployment. Unfortunately, the attackers' resourcefulness on the one hand, and the usually short amount of time available to security evaluators (and the human error factor) on the other, make this an unfair race. Consequently, researchers are looking into possible ways of making security evaluations more reliable and faster. To that end, machine learning techniques have proven to be a viable candidate, although the challenge is far from solved. Our project aims at developing automatic frameworks able to assess various potential side-channel and fault injection threats coming from diverse sources. Such systems will give security evaluators, and above all companies producing chips for security applications, an option to find potential weaknesses early and to assess the trade-off between making a product more secure and making it more implementation-friendly. To this end, we plan to use machine learning techniques coupled with novel techniques not explored before for side-channel and fault analysis. In addition, we will design new techniques specially tailored to improve the performance of this evaluation process. Our research fills the gap between what is known in academia on physical attacks and what is needed in industry to prevent such attacks. In the end, once our frameworks become operational, they could also be a useful tool for mitigating other types of threats, such as ransomware or rootkits.
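As a hedged illustration of how machine learning enters such an evaluation, the sketch below trains a classifier on synthetic, labelled power traces, a simplified profiled side-channel step; a real evaluation would use measured traces and a concrete leakage target such as the Hamming weight of an S-box output.

```python
# Synthetic traces with an injected leakage pattern; illustrates a profiled
# (supervised) side-channel analysis step, not the project's actual framework.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_traces, n_samples = 5_000, 100
hw = rng.integers(0, 9, n_traces)                 # Hamming-weight classes 0..8
traces = rng.normal(0, 1, (n_traces, n_samples))
traces[:, 40:45] += hw[:, None] * 0.8             # leakage around samples 40-45

X_train, X_test, y_train, y_test = train_test_split(
    traces, hw, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(f"profiling accuracy: {clf.score(X_test, y_test):.2f}")

# Sample points the model relies on indicate where in time the device leaks.
leaky_points = np.argsort(clf.feature_importances_)[-5:]
print("most informative sample points:", sorted(leaky_points.tolist()))
```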