In recent years, Financial Credit Risk Assessment (FCRA) has become an increasingly important issue within the financial industry. Consequently, the search for features that can predict the credit risk of an organization has intensified, and a variety of features has been proposed using multiple statistical techniques. Through a structured literature review, 258 papers were selected, from which 835 features were identified. The features were analyzed with respect to the type of feature, the information sources required, and the type of organization that applies them. Based on the results of this analysis, the features were plotted in the FCRA Model. The results show that most features focus on hard information from a transactional source, based on official information with high latency. In this paper, we readdress and re-present our earlier work [1], extending the previous research with more detailed descriptions of the related literature, findings, and results, which provides a grounded basis from which further research on FCRA can be conducted.
Because of both the shortcomings of existing risk assessment methodologies and newly available tools to predict hazard and risk with machine-learning approaches, there has been an emerging emphasis on probabilistic risk assessment. Increasingly sophisticated AI models can be applied to a plethora of exposure and hazard data not only to obtain predictions for particular endpoints but also to estimate the uncertainty of the risk assessment outcome. This provides the basis for a shift from deterministic to more probabilistic approaches, but it comes at the cost of increased process complexity, as it requires more resources and human expertise. There are still challenges to overcome before a probabilistic paradigm is fully embraced by regulators. Building on an earlier white paper (Maertens et al., 2022), a workshop discussed the prospects, challenges, and path forward for implementing such AI-based probabilistic hazard assessment. Moving forward, we expect to see the transition from categorical to probabilistic and dose-dependent hazard outcomes, the application of internal thresholds of toxicological concern for data-poor substances, the adoption of user-friendly open-source software, a rise in the expertise required of toxicologists to understand and interpret artificial intelligence models, and honest communication of uncertainty in risk assessment to the public.
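The shift from a deterministic point estimate to a probabilistic hazard outcome can be illustrated with a minimal sketch: instead of evaluating a dose-response model at fixed parameter values, we propagate parameter uncertainty through it by Monte Carlo sampling and report a distribution with an uncertainty interval. The Hill model, the priors, and all numeric values below are illustrative assumptions, not taken from the workshop or the white paper.

```python
import random
import statistics

def hill_response(dose, ec50, hill):
    """Hill dose-response curve: fraction of the maximal effect at a given dose."""
    return dose**hill / (ec50**hill + dose**hill)

def probabilistic_hazard(dose, n_draws=10_000, seed=42):
    """Propagate parameter uncertainty through the dose-response model.

    Instead of a single deterministic output, sample plausible EC50 and
    Hill-coefficient values from assumed (hypothetical) priors and return
    the resulting distribution of predicted effects.
    """
    rng = random.Random(seed)
    draws = []
    for _ in range(n_draws):
        ec50 = rng.lognormvariate(mu=3.0, sigma=0.4)  # assumed prior, median ~ e^3 ≈ 20
        hill = max(0.5, rng.gauss(1.5, 0.3))          # assumed prior, truncated at 0.5
        draws.append(hill_response(dose, ec50, hill))
    return draws

draws = probabilistic_hazard(dose=10.0)
mean_effect = statistics.mean(draws)
draws_sorted = sorted(draws)
lo = draws_sorted[int(0.025 * len(draws_sorted))]
hi = draws_sorted[int(0.975 * len(draws_sorted))]
print(f"mean effect {mean_effect:.2f}, 95% interval [{lo:.2f}, {hi:.2f}]")
```

The interval, rather than the mean alone, is what distinguishes the probabilistic output: it makes the uncertainty of the assessment explicit and communicable.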
The implementation of marine spatial plans, as required by the European Union (EU) Directive on Maritime Spatial Planning (MSP), poses novel demands for the development of decision support tools (DSTs). One fundamental need is for tools that guide decisions about the allocation of human activities at sea in ways that are ecosystem-based and lead to sustainable use of resources. The MSP Directive was the main driver behind the development of spatial and non-spatial DSTs for the analysis of marine and coastal areas across European seas. In this research we apply an analytical framework, designed together with DST software developers and managers, to the analysis of six DSTs supporting MSP in the Baltic Sea, the North Sea, and the Mediterranean Sea. The framework compares the main conceptual, technical, and practical aspects through which these DSTs contribute to advancing the MSP knowledge base, and it identifies future needs for the development of the tools. Results show that all of the studied DSTs include elements to support ecosystem-based management at different geographical scales (from national to macro-regional), relying on cumulative effects assessment and on functionalities that facilitate communication at the science-policy interface. Based on our synthesis, we propose a set of recommendations for knowledge exchange in relation to further DST developments, for mechanisms to share experience among the user-developer community, and for actions to increase the effectiveness of DSTs in MSP processes.
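The cumulative effects assessment these DSTs rely on is, at its core, an additive score per spatial unit: pressure intensity multiplied by the sensitivity of the ecosystem component, summed over all pressure layers. The sketch below shows that calculation in minimal form; the grid cells, pressure layers, and sensitivity weights are hypothetical values for illustration, not taken from any of the six tools studied.

```python
# Pressure intensity per grid cell (hypothetical values in [0, 1]).
pressures = {
    "shipping":   {"cell_1": 0.8, "cell_2": 0.2},
    "fishing":    {"cell_1": 0.5, "cell_2": 0.9},
    "wind_farms": {"cell_1": 0.0, "cell_2": 0.4},
}

# Sensitivity of the ecosystem component to each pressure (hypothetical weights).
sensitivity = {"shipping": 0.6, "fishing": 0.9, "wind_farms": 0.3}

def cumulative_effect(cell: str) -> float:
    """Additive cumulative effects score: sum of intensity x sensitivity."""
    return sum(layers[cell] * sensitivity[p] for p, layers in pressures.items())

scores = {cell: cumulative_effect(cell) for cell in ("cell_1", "cell_2")}
print(scores)  # cell_2 scores higher: intense fishing meets high sensitivity
```

In a real DST the cells would come from a spatial grid and the layers from mapped human activities, but the comparison across cells works the same way: higher scores flag areas where allocating further activities conflicts most with ecosystem-based management.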