The last decade has seen increasing industrial demand for computerized visual inspection. Applications are rapidly becoming more complex, often with more demanding real-time constraints. However, since 2004 the clock frequency of CPUs has not increased significantly. Computer vision applications demand ever more processing power but are limited by the performance of sequential processor architectures. The only way to obtain more performance from commodity hardware, such as multi-core processors and graphics cards, is parallel programming. This article focuses on a practical question: how can the processing time of vision algorithms be reduced through parallelization in an economical way, while keeping them executable on multiple platforms?
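As a minimal illustration of the kind of parallelism involved (not the article's implementation), the sketch below distributes independent per-frame work over CPU cores with the Python standard library; the frame data and the `threshold` kernel are placeholders for a real inspection operation.

```python
# Minimal sketch: spread per-frame vision work across CPU cores.
# The frames and the threshold() kernel are toy placeholders.
from concurrent.futures import ProcessPoolExecutor

def threshold(frame, level=128):
    """Toy per-frame operation: count pixels above a grey-level threshold."""
    return sum(1 for pixel in frame if pixel > level)

def process_serial(frames):
    return [threshold(f) for f in frames]

def process_parallel(frames):
    # Frames are processed independently, so the work is embarrassingly parallel.
    with ProcessPoolExecutor() as pool:
        return list(pool.map(threshold, frames))

if __name__ == "__main__":
    frames = [[(i * j) % 256 for j in range(10_000)] for i in range(64)]
    assert process_serial(frames) == process_parallel(frames)
```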
Estimation of the factor model by unweighted least squares (ULS) is distribution free, yields consistent estimates, and is computationally fast if the Minimum Residuals (MinRes) algorithm is employed. MinRes algorithms produce a converging sequence of monotonically decreasing ULS function values. Various suggestions for algorithms of the MinRes type are made for confirmatory as well as for exploratory factor analysis. These suggestions include the implementation of inequality constraints and the prevention of Heywood cases. A simulation study, comparing the bootstrap standard deviations for the parameters with the standard errors from maximum likelihood, indicates that these are virtually equal when the score vectors are sampled from the normal distribution. Two empirical examples demonstrate the usefulness of constrained exploratory and confirmatory factor analysis by ULS used in conjunction with the bootstrap method.
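For reference, the ULS discrepancy that MinRes-type algorithms decrease at each step can be written as follows (notation introduced here, not taken from the abstract): with sample correlation matrix R, loading matrix Λ, and diagonal uniqueness matrix Ψ,

```latex
F_{\mathrm{ULS}}(\Lambda,\Psi) \;=\; \tfrac{1}{2}\,
  \operatorname{tr}\!\Bigl[\bigl(R - \Lambda\Lambda^{\top} - \Psi\bigr)^{2}\Bigr]
```

MinRes works with the off-diagonal residuals only (equivalently, it leaves the diagonal of R − ΛΛᵀ unconstrained), so each update lowers the sum of squared off-diagonal residuals, which yields the monotonically decreasing sequence of function values referred to above.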
The needs of the aviation industry have led to an increase in the number of aircraft in the sky. As the number of flights within an airspace increases, so does the chance of a mid-air collision. Systems such as the Traffic Alert and Collision Avoidance System (TCAS) and the Airborne Collision Avoidance System (ACAS) are currently used to alert pilots to potential mid-air collisions. TCAS and ACAS use algorithms that perform Aircraft Trajectory Predictions (ATPs) to detect potential conflicts between aircraft. In this paper, three aircraft trajectory prediction algorithms, a Deep Neural Network (DNN), a Random Forest (RF), and Extreme Gradient Boosting, were implemented and evaluated in terms of their accuracy and robustness in predicting the future aircraft heading. The algorithms were also evaluated on adversarial samples, and adversarial training was applied as a defense method to increase the robustness of the ATP algorithms against such samples. The results showed that, of the three algorithms, extreme gradient boosting was the most robust against adversarial samples, and that adversarial training can improve robustness against low-intensity adversarial samples. The contributions of this paper are the evaluation of different aircraft trajectory prediction algorithms, an exploration of the effects of adversarial attacks, and an assessment of the defense against low-perturbation adversarial samples compared to using no defense mechanism.
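The sketch below shows, under stated assumptions, the shape of such an experiment: an XGBoost model fitted to a few hypothetical trajectory features to predict a future heading, then evaluated on clean inputs and on inputs with small bounded random perturbations as a crude stand-in for low-intensity adversarial noise. The feature set, target, and perturbation are illustrative only and the `xgboost` package is assumed to be available; this is not the paper's experimental setup.

```python
# Sketch: heading prediction with XGBoost on synthetic trajectory features,
# compared on clean vs. slightly perturbed test inputs.
import numpy as np
from xgboost import XGBRegressor

rng = np.random.default_rng(0)

# Hypothetical features: latitude, longitude, altitude, ground speed, current heading.
X = rng.normal(size=(2000, 5))
# Toy target: future heading as a smooth function of the features plus noise.
y = X @ np.array([5.0, -3.0, 0.5, 2.0, 40.0]) + rng.normal(scale=1.0, size=2000)

X_train, X_test = X[:1600], X[1600:]
y_train, y_test = y[:1600], y[1600:]

model = XGBRegressor(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X_train, y_train)

def mae(model, X, y):
    return float(np.mean(np.abs(model.predict(X) - y)))

# Bounded random perturbation as a stand-in for a low-intensity attack.
eps = 0.1
X_adv = X_test + rng.uniform(-eps, eps, size=X_test.shape)

print("MAE clean:    ", mae(model, X_test, y_test))
print("MAE perturbed:", mae(model, X_adv, y_test))
```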
Aircraft require significant quantities of fuel to generate the power needed to sustain flight. Burning this fuel releases polluting particles into the atmosphere and constitutes a direct cost attributed to fuel consumption. Optimizing various aircraft operations in different flight phases, such as cruise and descent, as well as terminal-area movements, has been identified as a way to reduce fuel requirements and thus pollution. The goal of this chapter is to briefly explain and apply different metaheuristic optimization algorithms to reduce the cost of the cruise flight phase in terms of fuel burn. Another goal is to present an overview of the most popular commercial aircraft models. The algorithms implemented for the different optimization strategies are the genetic algorithm, the artificial bee colony algorithm, and the ant colony algorithm. The aircraft fuel burn model used here takes the form of a Performance Database; a methodology for creating this model using a Level D aircraft research flight simulator is briefly explained. Weather plays an important role in flight optimization, so this work also explains a method for incorporating open-source weather data. The results show that every optimization algorithm was able to reduce the fuel consumption of the flight, thereby reducing pollution emissions and contributing to airlines' profit margins.
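To make the metaheuristic idea concrete, here is a minimal genetic-algorithm sketch for a cruise point. The `cruise_cost` function is a hypothetical stand-in for a Performance Database lookup returning fuel burn for a (Mach, altitude) pair, and the bounds and GA parameters are illustrative only, not those used in the chapter.

```python
# Minimal genetic algorithm over (Mach, altitude); cruise_cost() is a toy
# surrogate for a Performance Database fuel-burn lookup.
import random

random.seed(1)

MACH_BOUNDS = (0.70, 0.82)
ALT_BOUNDS = (30_000.0, 41_000.0)  # feet

def cruise_cost(mach, alt_ft):
    # Toy convex surrogate with an optimum inside the bounds.
    return (mach - 0.78) ** 2 * 1e4 + ((alt_ft - 37_000.0) / 1_000.0) ** 2

def random_individual():
    return (random.uniform(*MACH_BOUNDS), random.uniform(*ALT_BOUNDS))

def crossover(a, b):
    return (random.choice((a[0], b[0])), random.choice((a[1], b[1])))

def mutate(ind, rate=0.2):
    mach, alt = ind
    if random.random() < rate:
        mach = min(max(mach + random.gauss(0, 0.01), MACH_BOUNDS[0]), MACH_BOUNDS[1])
    if random.random() < rate:
        alt = min(max(alt + random.gauss(0, 500.0), ALT_BOUNDS[0]), ALT_BOUNDS[1])
    return (mach, alt)

population = [random_individual() for _ in range(30)]
for _ in range(50):
    population.sort(key=lambda ind: cruise_cost(*ind))
    parents = population[:10]  # elitist selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(20)]
    population = parents + children

best = min(population, key=lambda ind: cruise_cost(*ind))
print(f"best Mach={best[0]:.3f}, altitude={best[1]:.0f} ft, cost={cruise_cost(*best):.3f}")
```

The artificial bee colony and ant colony variants follow the same pattern: candidate cruise settings are generated, scored through the performance model, and the better-scoring candidates guide the next round of search.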
Machine learning models have proven to be reliable methods for classification tasks. However, little research has previously been done on classifying dwelling characteristics from smart meter and weather data. Insight into dwelling characteristics can help create or improve policies for building new dwellings to the NZEB standard. This paper compares different machine learning algorithms and the methods used to implement the models correctly, including data pre-processing, model validation, and evaluation. Smart meter data provided by Groene Mient was used to train several machine learning algorithms, and the resulting models were compared on their performance. The results showed that the Recurrent Neural Network (RNN) performed best, with an accuracy of 96%. Cross-validation was used to validate the models, with 80% of the data used for training and 20% for testing. Evaluation metrics were used to produce classification reports, which indicate which models work best for this specific problem. The models were programmed in Python.
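The sketch below illustrates the validation and evaluation pattern described above: an 80/20 train/test split followed by a per-class classification report. The data is synthetic and a random forest stands in for the models compared in the paper; in the study the features would come from smart meter and weather readings and the labels from known dwelling characteristics.

```python
# Sketch: 80/20 split plus a classification report on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 8))                 # e.g. consumption statistics, temperature
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)  # toy dwelling-characteristic label

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_train, y_train)

print(classification_report(y_test, clf.predict(X_test)))
```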
Summary: Xpaths is a collection of algorithms that predicts compound-induced molecular mechanisms of action by integrating phenotypic endpoints across species and proposes follow-up tests in model organisms to validate these pathway predictions. The Xpaths algorithms are applied to predict developmental and reproductive toxicity (DART) and are implemented in an in silico platform called DARTpaths.
Conducting large calculations manually with pen and paper, following prescribed procedures or algorithms, has been diminishing in significance for some time. In most cultures, and for many years already, individuals have employed digital tools for such computational tasks when confronted with them in daily life. Yet a closer examination of prevalent practices in the teaching of basic numeracy skills in adult education reveals a persistent emphasis on mastering standardized manual calculation techniques, especially with abstract and decontextualized numbers. This emphasis predominantly stems from the belief that mastering these manual procedures forms the cornerstone of all numeracy abilities. By contrast, our research indicates that the numeracy skills most frequently used and required in contemporary professions and daily activities are higher-order capabilities (Hoogland and Stoker, 2021; Boels et al., 2022; Hoogland and Díez-Palomar, 2022). These include interpretation, reasoning, mathematizing, estimation, critical reflection on quantitative data, and the use of digital tools for computation. Numeracy education for adults should therefore prioritize these competencies in order to be effective.
The Gate Assignment Problem is tackled every day by airports around the world. Assigning aircraft to gates carries different associated costs. Conventional algorithms use mathematical models in which only some assignment restrictions are considered. The novelty of the approach proposed in this paper is the coupling of an optimization algorithm with an airport gate allocation simulator that provides the full set of assignment costs and restrictions. This allows the simulator to be used as a substitute for the assignment cost function of traditional algorithms. The proposed methodology starts from a feasible solution provided by the simulator, which the framework then improves to a substantial extent.
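A minimal sketch of this coupling, under stated assumptions, is shown below: an optimizer treats a black-box `simulate()` call as the cost function for a gate assignment. Here `simulate()` is a hypothetical stand-in for the airport simulator that scores a toy cost matrix and penalizes one gate restriction, and simple local search stands in for the paper's optimization algorithm.

```python
# Sketch: optimizer using a black-box simulator call as the assignment cost.
import random

random.seed(0)
N_FLIGHTS, N_GATES = 12, 5
COST = [[random.randint(1, 20) for _ in range(N_GATES)] for _ in range(N_FLIGHTS)]

def simulate(assignment):
    """Placeholder for the gate allocation simulator: cost plus restriction penalties."""
    cost = sum(COST[f][g] for f, g in enumerate(assignment))
    for gate in range(N_GATES):
        load = assignment.count(gate)
        cost += max(0, load - 3) * 50   # toy restriction: at most 3 flights per gate
    return cost

def local_search(initial, iterations=2000):
    best, best_cost = list(initial), simulate(initial)
    for _ in range(iterations):
        candidate = list(best)
        candidate[random.randrange(N_FLIGHTS)] = random.randrange(N_GATES)
        c = simulate(candidate)
        if c < best_cost:               # keep only improving moves
            best, best_cost = candidate, c
    return best, best_cost

feasible = [f % N_GATES for f in range(N_FLIGHTS)]  # stand-in for the simulator's initial solution
solution, cost = local_search(feasible)
print("initial cost:", simulate(feasible), "improved cost:", cost)
```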
In this paper, we explore the design of web-based advice robots to enhance users' confidence in acting upon the provided advice. Drawing on research on algorithm acceptance and explainable AI, we hypothesise four design principles that may encourage interactivity and exploration, thus fostering users' confidence to act. Through a value-oriented prototype experiment and value-oriented semi-structured interviews, we tested these principles, confirming three of them and identifying an additional one. The four resulting principles appear to contribute to the values of agency and trust: (1) put context questions and the resulting advice on one page and allow live, iterative exploration; (2) use action- or change-oriented questions to adjust the input parameters; (3) actively offer alternative scenarios based on counterfactuals; and (4) show all options instead of only the recommended one(s). Our study integrates the Design Science Research approach with a Value Sensitive Design approach.