In the case of a major cyber incident, organizations usually rely on external providers of Cyber Incident Response (CIR) services. CIR consultants operate in a dynamic, constantly changing environment in which they must actively engage in information management and problem solving while adapting to complex circumstances. In this challenging environment, CIR consultants need to make critical decisions about what to advise clients impacted by a major cyber incident. Despite its relevance, CIR decision making is an understudied topic. The objective of this preliminary investigation is therefore to understand what decision-making strategies experienced CIR consultants use during challenging incidents and to offer suggestions for training and decision aiding. A general understanding of operational decision making under pressure, uncertainty, and high stakes was established by reviewing the body of knowledge known as Naturalistic Decision Making (NDM). The general conclusion of NDM research is that experts usually make adequate decisions based on (fast) recognition of the situation and application of the most obvious (default) response pattern that has worked in similar situations in the past. In exceptional situations, however, this kind of recognition-primed decision making results in suboptimal decisions, as experts are likely to miss conflicting cues once the situation has been quickly recognized under pressure. Understanding the default response pattern, and the rare occasions on which it could be ineffective, is therefore key to improving and aiding cyber incident response decision making. We therefore interviewed six experienced CIR consultants and used the critical decision method (CDM) to learn how they made decisions under challenging conditions. The main conclusion is that the default response pattern for CIR consultants during cyber breaches is to reduce uncertainty as much as possible by gathering and investigating data, and thus to delay decision making about eradication until the investigation is complete. According to the respondents, this strategy usually works well and provides the most assurance that the threat actor can be completely removed from the network. However, the majority of respondents could recall at least one case in which this strategy (in hindsight) resulted in unnecessary theft of data or damage. Interestingly, this finding differs strikingly from other operational decision-making domains, such as the military, police, and fire services, in which there is a general tendency to act rapidly rather than to search for more information. The main advice is that training and decision aiding of (novice) cyber incident responders should be aimed at the following: (a) make cyber incident responders aware of how recognition-primed decision making works; (b) discuss the default response strategy that typically works well in several scenarios; (c) explain the exception and how it can be recognized; (d) provide alternative response strategies that work better in exceptional situations.
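Purely as an illustration of the recognition-primed decision (RPD) pattern described above, the following Python sketch encodes the default CIR strategy and the exceptional case. All names and cues (Situation, active_data_theft, the response labels) are hypothetical assumptions for illustration, not taken from the study.

```python
from dataclasses import dataclass, field

@dataclass
class Situation:
    recognized_pattern: str          # e.g. "commodity-ransomware intrusion"
    conflicting_cues: list = field(default_factory=list)  # cues contradicting the recognized pattern
    active_data_theft: bool = False  # e.g. exfiltration still ongoing

def choose_response(situation: Situation) -> str:
    """Recognition-primed decision making: reuse the default response that
    worked in similar past situations, unless conflicting cues signal an
    exceptional case that calls for a different strategy."""
    if situation.conflicting_cues or situation.active_data_theft:
        # Exceptional case: investigating further while the attacker remains
        # active would allow continued theft or damage, so act early instead.
        return "contain and eradicate now"
    # Default pattern reported by the respondents: reduce uncertainty by
    # investigating first and delay eradication until the investigation is done.
    return "investigate first, eradicate later"
```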
DOCUMENT
This paper presents a preliminary study that reviews and explores bias strategies using a framework from a different discipline: change management. The hypothesis is: if the major problem with implicit bias strategies is that they do not translate into actual changes in behavior, then it could be helpful to learn from fields that have contributed to successful change interventions, such as reward management, social neuroscience, health behavior change, and cognitive behavioral therapy. The results of this integrated approach are: (1) current bias strategies can be improved, and new ones developed, with insights from these adjacent fields of change management; (2) it could be more sustainable to invest in a holistic and proactive bias-strategy approach that targets the social environment, eliminating the very conditions under which biases arise; and (3) while implicit biases are automatic, future studies should invest more in strategies that empower people as “change agents” who can act proactively to regulate the very environment that gives rise to their biased thoughts and behaviors.
DOCUMENT
Despite the widespread application of recommender systems (RecSys) in our daily lives, rather limited research has been done on quantifying the unfairness and biases present in such systems. Prior work largely focuses on determining whether a RecSys is discriminating or not but does not measure the amount of bias present in these systems. Biased recommendations may lead to decisions that can potentially have adverse effects on individuals, sensitive user groups, and society. Hence, it is important to quantify these biases for fair and safe commercial applications of these systems. This paper focuses on quantifying popularity bias that stems directly from the output of RecSys models, leading to over-recommendation of popular items that are likely to be misaligned with user preferences. Four metrics are proposed to quantify popularity bias in RecSys over time, in a dynamic setting, across different sensitive user groups. These metrics are demonstrated for four collaborative filtering based RecSys algorithms trained on two benchmark datasets commonly used in the literature. The results show that, when used conjointly, the proposed metrics provide a comprehensive understanding of growing disparities in treatment between sensitive groups over time.
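Since the paper's four metrics are not spelled out in this abstract, the following Python sketch illustrates one common popularity-bias measure in the same spirit: average recommendation popularity (ARP) computed per sensitive user group, which can be recomputed at each retraining step to expose growing disparities over time. All function and variable names here are assumptions for illustration, not the paper's actual metrics.

```python
def item_popularity(interactions):
    """interactions: list of deduplicated (user, item) pairs from training data.
    Returns {item: fraction of users that interacted with the item}."""
    users = {u for u, _ in interactions}
    counts = {}
    for _, item in interactions:
        counts[item] = counts.get(item, 0) + 1
    return {item: c / len(users) for item, c in counts.items()}

def group_arp(recs, popularity, groups):
    """recs: {user: non-empty list of recommended items};
    groups: {user: sensitive group label}.
    Returns the average popularity of recommended items per group."""
    sums, counts = {}, {}
    for user, items in recs.items():
        g = groups[user]
        arp_user = sum(popularity.get(i, 0.0) for i in items) / len(items)
        sums[g] = sums.get(g, 0.0) + arp_user
        counts[g] = counts.get(g, 0) + 1
    return {g: sums[g] / counts[g] for g in sums}

# Usage: recompute group_arp at each retraining step t of the dynamic setting;
# a widening gap |ARP[g1] - ARP[g2]| across steps indicates growing disparity
# in treatment between the two sensitive groups.
```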
DOCUMENT