This paper introduces a novel distributed algorithm designed to optimize the deployment of access points within Mobile Ad Hoc Networks (MANETs) for better service quality in infrastructure-less environments. The algorithm operates through local, independent execution by each network node, ensuring a high degree of scalability and adaptability to changing network conditions. The primary goal is to match the spatial distribution of access points with the distribution of client devices while maintaining strong connectivity to the network root. Using autonomous decision-making and choreographed path-planning, the algorithm bridges the gap between demand-responsive network service provision and the maintenance of crucial network connectivity links. The performance of this approach is assessed using numerical results generated by simulations.
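The abstract does not spell out the local rule, but a minimal sketch can illustrate the general idea of demand-responsive placement under a connectivity constraint. The code below is an assumption-laden illustration, not the paper's algorithm: the function name `local_target`, the radio range, and the centroid-plus-clipping rule are all hypothetical.

```python
import math

# Illustrative sketch only: a simplified local placement rule in the spirit of the
# abstract (follow the client distribution while staying connected to the parent
# toward the root). The rule, names, and parameters are hypothetical, not the
# paper's actual algorithm.

RADIO_RANGE = 100.0  # assumed maximum link distance between access points

def local_target(clients, parent, radio_range=RADIO_RANGE):
    """Each access point runs this independently with only local information.

    clients: list of (x, y) positions of client devices heard locally
    parent:  (x, y) position of the next access point toward the network root
    returns: the position this access point should move toward
    """
    if not clients:
        return parent  # no local demand: fall back toward the root link

    # Demand-responsive part: head for the centroid of the local clients.
    cx = sum(x for x, _ in clients) / len(clients)
    cy = sum(y for _, y in clients) / len(clients)

    # Connectivity part: if the centroid would break the link to the parent,
    # pull the target back onto the edge of the parent's radio range.
    dx, dy = cx - parent[0], cy - parent[1]
    dist = math.hypot(dx, dy)
    if dist > radio_range:
        scale = radio_range / dist
        return (parent[0] + dx * scale, parent[1] + dy * scale)
    return (cx, cy)

# Example: clients clustered far from the parent yield a target clipped to range.
print(local_target([(250.0, 0.0), (270.0, 10.0)], parent=(0.0, 0.0)))
```

The two branches of the rule mirror the two goals named in the abstract: matching the client distribution and never stretching the link toward the root beyond radio range.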
Analyzing historical decision-related data can help support operational decision-making. Decision mining can be employed for such analysis. This paper proposes the Decision Discovery Framework (DDF), designed to develop, adapt, or select a decision discovery algorithm by outlining specific guidelines for input data usage, classifier handling, and decision model representation. The framework incorporates the Decision Model and Notation (DMN) standard for enhanced comprehensibility and applies normalization to simplify decision tables. The framework's efficacy was tested by adapting the C4.5 algorithm into the DM45 algorithm. The proposed adaptations include (1) the use of a decision log as input, (2) the generation of an unpruned decision tree, (3) the generation of DMN, and (4) the normalization of the decision table. Future research can focus on supporting practitioners in modeling decisions, ensuring their decision-making is compliant, and suggesting improvements to the modeled decisions. Another future research direction is to explore the ability to process unstructured data as input for the discovery of decisions.
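As a rough illustration of adaptations (1) and (2), the sketch below trains an unpruned tree on a toy decision log with scikit-learn. Note that scikit-learn implements CART rather than C4.5, and the decision log, column names, and rule output are invented, so this is a hedged approximation of the workflow rather than the DM45 algorithm itself.

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

# Hedged sketch of the adaptations described in the abstract, not the DM45
# implementation: (1) a decision log as input, (2) an unpruned tree,
# (3)/(4) a rule / decision-table style output. All data is made up.

# (1) A toy decision log: one row per historical decision with its inputs and outcome.
decision_log = pd.DataFrame({
    "amount":   [100, 2500, 300, 4000, 150, 3200],
    "customer": ["new", "known", "known", "new", "new", "known"],
    "decision": ["accept", "review", "accept", "reject", "accept", "review"],
})

X = pd.get_dummies(decision_log[["amount", "customer"]])
y = decision_log["decision"]

# (2) scikit-learn's CART tree is unpruned by default (no max_depth, ccp_alpha=0),
# so every observed decision path is preserved, as the adaptation requires.
tree = DecisionTreeClassifier(random_state=0)
tree.fit(X, y)

# (3)/(4) Print the learned rules; turning these into a normalized DMN decision
# table would be a further (manual or scripted) step.
print(export_text(tree, feature_names=list(X.columns)))
```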
Peer-to-peer (P2P) energy trading has been recognized as an important technology for increasing the local self-consumption of photovoltaics in local energy systems. Different auction mechanisms and bidding strategies have been investigated in previous studies. However, there has been no comparative analysis of how different market structures influence the local energy system's overall performance. This paper presents and compares two market structures, namely a centralized market and a decentralized market. Two pricing mechanisms for the centralized market and two bidding strategies for the decentralized market are developed. The results show that the centralized market leads to higher overall system self-consumption and profits. In the decentralized market, some electricity is sold directly to the grid due to unmatchable bids and asks. Bidding strategies based on a learning algorithm achieve better performance than the random method.
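To make the unmatched-energy effect concrete, here is a minimal, hypothetical clearing routine for a single trading interval. The mid-price rule, the feed-in price, and all quantities are assumptions made for illustration and do not come from the paper.

```python
# Hypothetical single-interval clearing sketch for a local P2P electricity market.
# Prices (EUR/kWh) and quantities (kWh) are illustrative only.

def clear_market(bids, asks, feed_in_price=0.08):
    """bids, asks: lists of (price, quantity).
    Returns (trades, grid_export): matched local trades and the PV surplus that
    is sold to the grid because no remaining bid can absorb it."""
    bids = sorted(([p, q] for p, q in bids), key=lambda b: -b[0])  # highest price first
    asks = sorted(([p, q] for p, q in asks), key=lambda a: a[0])   # cheapest offer first
    trades, bi, ai = [], 0, 0
    while bi < len(bids) and ai < len(asks) and bids[bi][0] >= asks[ai][0]:
        qty = min(bids[bi][1], asks[ai][1])
        price = (bids[bi][0] + asks[ai][0]) / 2.0  # mid-price as one possible mechanism
        trades.append((price, qty))
        bids[bi][1] -= qty
        asks[ai][1] -= qty
        if bids[bi][1] == 0:
            bi += 1
        if asks[ai][1] == 0:
            ai += 1
    grid_export = sum(q for _, q in asks[ai:])  # unmatched surplus -> grid at feed_in_price
    return trades, grid_export

# Example: one ask cannot be fully matched and part of it is exported to the grid.
trades, export = clear_market(bids=[(0.22, 2.0), (0.18, 1.0)],
                              asks=[(0.15, 1.5), (0.20, 3.0)])
print(trades, export)
```

In the example call, the remaining ask is priced above the last bid and is therefore exported to the grid, which is exactly the effect the abstract attributes to the decentralized market.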
Summary: Xpaths is a collection of algorithms that allow for the prediction of compound-induced molecular mechanisms of action by integrating phenotypic endpoints across different species, and that propose follow-up tests in model organisms to validate these pathway predictions. The Xpaths algorithms are applied to predict developmental and reproductive toxicity (DART) and implemented in an in silico platform called DARTpaths.
In this post I give an overview of the theory, tools, frameworks, and best practices I have found so far around the testing (and debugging) of machine learning applications. I start by giving an overview of what is specific about testing machine learning applications.
Thermal comfort, the state of mind that expresses satisfaction with the thermal environment, is an important aspect of the building design process, as modern man spends most of the day indoors. This paper reviews developments in indoor thermal comfort research and practice since the second half of the 1990s and groups these developments around two main themes: (i) thermal comfort models and standards, and (ii) advances in computerization. Within the first theme, the PMV model (Predicted Mean Vote), created by Fanger in the late 1960s, is discussed in the light of the emergence of adaptive thermal comfort models. The adaptive models are based on the adaptive opportunities of occupants and relate to options for personal control of the indoor climate as well as to psychology and performance. Both types of model have been considered in the latest round of thermal comfort standard revisions. The second theme focuses on the ever-increasing role played by computerization in thermal comfort research and practice, including sophisticated multi-segmental modeling, building performance simulation, transient thermal conditions and interactions, and thermal manikins.
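For reference, the core of Fanger's PMV model (as later standardized in ISO 7730) can be summarized by the relation below, where M is the metabolic rate in W/m² and L is the thermal load on the body, i.e. the difference between internal heat production and heat loss to the actual environment. The full model additionally requires an iterative solution for the clothing surface temperature, which is omitted here.

```latex
% Core form of Fanger's PMV model (cf. ISO 7730); the scaling factor depends on the
% metabolic rate M, and L is the thermal load on the body.
\mathrm{PMV} = \left(0.303\, e^{-0.036\,M} + 0.028\right) \cdot L
```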
From the article: Over the last decades, philosophers and cognitive scientists have argued that the brain constitutes only one of several contributing factors to cognition, the other factors being the body and the world. This position we refer to as Embodied Embedded Cognition (EEC). The main purpose of this paper is to consider what EEC implies for the task interpretation of the control system. We argue that the traditional view of the control system as involved in planning and decision making based on beliefs about the world runs into the problem of computational intractability. EEC views the control system as relying heavily on the naturally evolved fit between organism and environment. A ‘lazy’ control structure could be ‘ignorantly successful’ in a ‘user friendly’ world, by facilitating the transitory creation of a flexible and integrated set of behavioral layers that are constitutive of ongoing behavior. We close by discussing the types of questions this could imply for empirical research in cognitive neuroscience and robotics.
Both Software Engineering and Machine Learning have become recognized disciplines. In this article I analyse the combination of the two: the engineering of machine learning applications. I believe the systematic way of working for machine learning applications differs at certain points from traditional (rule-based) software engineering. The question I set out to investigate is: "How does software engineering change when we develop machine learning applications?". This question is not easy to answer and turns out to be rather new, with few publications on the topic. This article collects what I have found so far.
From the article: "This paper describes the process of introducing blended learning in a CS educational program. The methodology that has been used as well as the motivation for the choices made are given. The rst results compared with results from previous courses that used a more classical teaching approach are given. These results show that the new methodology proves to be promising and successful. The successes of the new program as well as the problems encountered are discussed with their possible solution."
We present a novel architecture for an AI system that allows a priori knowledge to be combined with deep learning. In traditional neural networks, all available data is pooled at the input layer. Our alternative neural network is constructed so that partial representations (invariants) are learned in the intermediate layers, which can then be combined with a priori knowledge or with other predictive analyses of the same data. This allows for smaller training datasets thanks to more efficient learning. In addition, because this architecture allows the inclusion of a priori knowledge and interpretable predictive models, the interpretability of the entire system increases while the data can still be used in a black-box neural network. Our system makes use of networks of neurons, rather than single neurons, to enable the representation of approximations (invariants) of the output.
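The abstract does not give a concrete layer layout, but the following hedged sketch shows one way such an architecture could look: an intermediate "invariant" representation is learned from the raw data and concatenated with a priori features before the final prediction. All dimensions, names, and the choice of PyTorch are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Illustrative sketch only (not the authors' architecture): a network that learns an
# intermediate "invariant" representation from raw data and combines it with a priori
# knowledge, encoded as extra features, before the final prediction.

class InvariantNet(nn.Module):
    def __init__(self, raw_dim=16, invariant_dim=4, prior_dim=3, out_dim=1):
        super().__init__()
        # Sub-network ("network of neurons") that learns the partial representation.
        self.invariant = nn.Sequential(
            nn.Linear(raw_dim, 32), nn.ReLU(),
            nn.Linear(32, invariant_dim),
        )
        # Head that combines the learned invariant with a priori knowledge.
        self.head = nn.Sequential(
            nn.Linear(invariant_dim + prior_dim, 16), nn.ReLU(),
            nn.Linear(16, out_dim),
        )

    def forward(self, raw, prior):
        z = self.invariant(raw)  # interpretable intermediate representation
        return self.head(torch.cat([z, prior], dim=-1))

model = InvariantNet()
raw = torch.randn(8, 16)   # raw input data
prior = torch.randn(8, 3)  # a priori knowledge / other predictive analyses
print(model(raw, prior).shape)  # torch.Size([8, 1])
```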