Artificial intelligence (AI) integration in Unmanned Aerial Vehicle (UAV) operations has significantly advanced the field through increased autonomy. However, evaluating the critical aspects of these operations remains a challenge. To address this, the present study proposes combining the Observe-Orient-Decide-Act (OODA) loop with the Analytic Hierarchy Process (AHP) to evaluate AI-UAV systems. Integrating the OODA loop into AHP aims to assess and weight the critical components of AI-UAV operations: (i) perception, (ii) decision-making, and (iii) adaptation. The research compares AHP evaluation results between different groups of UAV operators. The findings identify areas for improvement in AI-UAV systems and guide the development of new technologies. In conclusion, this combined approach offers a comprehensive method for evaluating the current and future state of AI-UAV operations, with a focus on operator group comparison.
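The AHP weighting step described above can be sketched in a few lines; the pairwise comparison judgments below are illustrative assumptions (not data from the study), and the geometric-mean method is used as a standard approximation of AHP priority weights:

```python
import math

# Criteria from the study: perception, decision-making, adaptation.
criteria = ["perception", "decision-making", "adaptation"]

# Hypothetical pairwise comparison matrix on Saaty's 1-9 scale:
# entry [i][j] states how much more important criterion i is than j.
A = [
    [1,   2,   3],    # perception vs (perception, decision-making, adaptation)
    [1/2, 1,   2],
    [1/3, 1/2, 1],
]

# Geometric-mean method: each weight is the row's geometric mean, normalized.
gm = [math.prod(row) ** (1 / len(row)) for row in A]
weights = [g / sum(gm) for g in gm]

# Consistency check: estimate lambda_max from A.w, then CI = (lambda_max - n)/(n - 1).
n = len(A)
Aw = [sum(A[i][j] * weights[j] for j in range(n)) for i in range(n)]
lambda_max = sum(Aw[i] / weights[i] for i in range(n)) / n
CI = (lambda_max - n) / (n - 1)
CR = CI / 0.58  # 0.58 is Saaty's random index for n = 3

for c, w in zip(criteria, weights):
    print(f"{c}: {w:.3f}")
print(f"consistency ratio: {CR:.3f}")
```

With these illustrative judgments, perception receives the largest weight and the consistency ratio stays well below the conventional 0.1 threshold; in the study itself, each operator group would supply its own comparison matrices, and the resulting weight vectors are what get compared across groups.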
DOCUMENT
This research reviews the current literature on the impact of Artificial Intelligence (AI) in the operation of autonomous Unmanned Aerial Vehicles (UAVs). This paper examines three key aspects in developing the future of Unmanned Aircraft Systems (UAS) and UAV operations: (i) design, (ii) human factors, and (iii) operation process. The use of widely accepted frameworks such as the "Human Factors Analysis and Classification System (HFACS)" and the "Observe–Orient–Decide–Act (OODA)" loop is discussed. This comprehensive review found that as autonomy increases, operator cognitive workload decreases and situation awareness improves, but it also found a corresponding decline in operator vigilance and an increase in trust in the AI system. These results provide valuable insights and opportunities for improving the safety and efficiency of autonomous UAVs in the future and suggest the need to include human factors in the development process.
DOCUMENT
Proper decision-making is one of the most important capabilities of an organization. Therefore, it is important to have a clear understanding and overview of the decisions an organization makes. A means to understanding and modeling decisions is the Decision Model and Notation (DMN) standard published by the Object Management Group in 2015. In this standard, it is possible to design and specify how a decision should be taken. However, DMN lacks elements to specify the actors that fulfil different roles in the decision-making process and does not take into account the autonomy of machines. In this paper, we re-address and re-present our earlier work [1], which focuses on the construction of a framework that takes into account different roles in the decision-making process and also includes the extent of autonomy when machines are involved in decision-making. We extend our previous research with a more detailed discussion of the related literature, running cases, and results, which provides a grounded basis from which further research on the governance of (semi-)automated decision-making can be conducted. The contributions of this paper are twofold: (1) a framework that combines both autonomy and separation-of-concerns aspects for decision-making in practice, and (2) the proposed theory forms a grounded argument to enrich the current DMN standard.
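A DMN decision can be sketched as a decision table with a "unique" hit policy; the rule set, the role annotations, and the autonomy level below are illustrative assumptions intended to mirror the kind of metadata the framework adds on top of DMN, not part of the standard itself:

```python
# Minimal sketch of a DMN-style decision table with a UNIQUE hit policy,
# annotated with the kind of role/autonomy metadata the framework proposes.
# All rule values, role names, and autonomy levels here are hypothetical.

decision = {
    "name": "Determine order discount",
    "hit_policy": "UNIQUE",
    # Extension beyond standard DMN: who fills which role in this decision,
    # and how autonomously a machine may take it.
    "roles": {"decision_maker": "pricing engine", "accountable": "sales manager"},
    "autonomy": "fully automated",  # e.g. advisory / human-in-the-loop / fully automated
    "rules": [
        # (condition on inputs, decision output)
        (lambda order: order["total"] >= 1000, {"discount": 0.10}),
        (lambda order: 500 <= order["total"] < 1000, {"discount": 0.05}),
        (lambda order: order["total"] < 500, {"discount": 0.00}),
    ],
}

def evaluate(table, inputs):
    """Return the output of the single matching rule (UNIQUE hit policy)."""
    hits = [out for cond, out in table["rules"] if cond(inputs)]
    if len(hits) != 1:
        raise ValueError(f"UNIQUE hit policy violated: {len(hits)} rules matched")
    return hits[0]

result = evaluate(decision, {"total": 750})
print(result)  # {'discount': 0.05}
```

The standard DMN part is the table and hit policy; the `roles` and `autonomy` fields sketch the gap the paper addresses, namely that DMN itself has no elements for recording who fills which role or how autonomous the machine is.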
DOCUMENT
In the case of a major cyber incident, organizations usually rely on external providers of Cyber Incident Response (CIR) services. CIR consultants operate in a dynamic and constantly changing environment in which they must actively engage in information management and problem solving while adapting to complex circumstances. In this challenging environment, CIR consultants need to make critical decisions about what to advise clients impacted by a major cyber incident. Despite its relevance, CIR decision making is an understudied topic. The objective of this preliminary investigation is therefore to understand what decision-making strategies experienced CIR consultants use during challenging incidents and to offer suggestions for training and decision-aiding. A general understanding of operational decision making under pressure, uncertainty, and high stakes was established by reviewing the body of knowledge known as Naturalistic Decision Making (NDM). The general conclusion of NDM research is that experts usually make adequate decisions based on (fast) recognition of the situation and applying the most obvious (default) response pattern that has worked in similar situations in the past. In exceptional situations, however, this way of recognition-primed decision making results in suboptimal decisions, as experts are likely to miss conflicting cues once the situation is quickly recognized under pressure. Understanding the default response pattern and the rare occasions in which this response pattern could be ineffective is therefore key for improving and aiding cyber incident response decision making. Therefore, we interviewed six experienced CIR consultants and used the critical decision method (CDM) to learn how they made decisions under challenging conditions.
The main conclusion is that the default response pattern for CIR consultants during cyber breaches is to reduce uncertainty as much as possible by gathering and investigating data, and thus to delay decision making about eradication until the investigation is completed. According to the respondents, this strategy usually works well and provides the most assurance that the threat actor can be completely removed from the network. However, the majority of respondents could recall at least one case in which this strategy resulted, in hindsight, in unnecessary data theft or damage. Interestingly, this finding is strikingly different from other operational decision-making domains such as the military, police, and fire services, in which there is a general tendency to act rapidly instead of searching for more information. The main advice is that training and decision aiding of (novice) cyber incident responders should be aimed at the following: (a) make cyber incident responders aware of how recognition-primed decision making works; (b) discuss the default response strategy that typically works well in several scenarios; (c) explain the exception and how the exception can be recognized; (d) provide alternative response strategies that work better in exceptional situations.
DOCUMENT