As Vehicle-to-Everything (V2X) communication technologies gain prominence, ensuring human safety from radiofrequency (RF) electromagnetic fields (EMF) becomes paramount. This study critically examines human RF exposure in the context of ITS-5.9 GHz V2X connectivity, employing a combination of numerical dosimetry simulations and targeted experimental measurements. The focus extends across Road-Side Units (RSUs), On-Board Units (OBUs), and, notably, the advanced vehicular technologies within a Tesla Model S, which include Bluetooth, Long Term Evolution (LTE) modules, and millimeter-wave (mmWave) radar systems. Key findings indicate that RF exposure levels for RSUs and OBUs, as well as from Tesla’s integrated technologies, consistently remain below the International Commission on Non-Ionizing Radiation Protection (ICNIRP) exposure guidelines by a significant margin. Specifically, the maximum exposure level around RSUs was observed to be 10 times lower than the ICNIRP reference level, and Tesla’s mmWave radar exposure did not exceed 0.29 W/m², well below the threshold of 10 W/m² set for the general public. This comprehensive analysis corroborates the effectiveness of numerical dosimetry in accurately predicting RF exposure and underscores the compliance of current V2X communication technologies with exposure guidelines, thereby supporting the safe advancement of intelligent transportation systems with respect to potential health risks.
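The kind of compliance check described above, comparing a computed power density against the 10 W/m² ICNIRP general-public reference level, can be illustrated with a minimal far-field sketch. The transmit power, antenna gain, and distance below are hypothetical placeholders, not the study's actual dosimetry model:

```python
import math

def power_density(p_tx_w: float, gain_lin: float, r_m: float) -> float:
    """Far-field power density S = P * G / (4 * pi * r^2), in W/m^2."""
    return p_tx_w * gain_lin / (4 * math.pi * r_m ** 2)

ICNIRP_GENERAL_PUBLIC = 10.0  # W/m^2, the reference level cited in the abstract

# Hypothetical RSU: 1 W transmit power, 10x antenna gain, evaluated at 2 m
s = power_density(p_tx_w=1.0, gain_lin=10.0, r_m=2.0)
compliant = s < ICNIRP_GENERAL_PUBLIC  # True for these illustrative numbers
```

Full dosimetry also accounts for near-field effects, duty cycle, and body absorption, which this free-space estimate deliberately omits.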
MULTIFILE
Analyzing historical decision-related data can help support actual operational decision-making processes. Decision mining can be employed for such analysis. This paper proposes the Decision Discovery Framework (DDF), designed to develop, adapt, or select a decision discovery algorithm by outlining specific guidelines for input data usage, classifier handling, and decision model representation. The framework incorporates the use of Decision Model and Notation (DMN) for enhanced comprehensibility, and normalization to simplify decision tables. The framework’s efficacy was tested by adapting the C4.5 algorithm into the DM45 algorithm. The proposed adaptations include (1) using a decision log as input, (2) ensuring the decision tree remains unpruned, (3) generating DMN output, and (4) normalizing the resulting decision table. Future research can focus on supporting practitioners in modeling decisions, ensuring their decision-making is compliant, and suggesting improvements to the modeled decisions. Another future research direction is to explore the ability to process unstructured data as input for the discovery of decisions.
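The core adaptation named above, growing an unpruned decision tree from a decision log and reading its root-to-leaf paths back as decision rules, can be sketched as follows. This uses scikit-learn's CART rather than the paper's C4.5/DM45 implementation, and the decision log columns and labels are invented for illustration:

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy decision log: each row is one recorded decision instance
# (hypothetical attributes: amount, risk_score -> decision outcome)
X = [[100, 1], [250, 3], [900, 2], [1200, 5], [80, 4], [1500, 1]]
y = ["approve", "approve", "review", "reject", "approve", "review"]

# Leaving max_depth, min_samples_leaf, and ccp_alpha at their defaults
# grows the tree until the leaves are pure, i.e. an unpruned tree
tree = DecisionTreeClassifier(random_state=0)
tree.fit(X, y)

# Each root-to-leaf path corresponds to one rule in a decision table
rules = export_text(tree, feature_names=["amount", "risk_score"])
print(rules)
```

Mapping such rules onto a normalized DMN decision table is the part the framework's guidelines address; this sketch only covers the tree-growing step.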
MULTIFILE
Despite the numerous business benefits of data science, the number of data science models in production is limited. Data science model deployment presents many challenges, and many organisations have little model deployment knowledge. This research studied five model deployments in a Dutch government organisation. The study revealed that, as a result of model deployment, a data science subprocess is added to the target business process, the model itself can be adapted, model maintenance is incorporated into the model development process, and a feedback loop is established between the target business process and the model development process. These model deployment effects and the related deployment challenges differ between strategic and operational target business processes. Based on these findings, guidelines are formulated which can form a basis for future principles on how to successfully deploy data science models. Organisations can use these guidelines as suggestions for solving their own model deployment challenges.
DOCUMENT
Completeness of data is vital for decision making and forecasting in Building Management Systems (BMS), as missing data can result in biased decision making down the line. This study creates a guideline for imputing the gaps in BMS datasets by comparing four methods: the K-Nearest Neighbour algorithm (KNN), Recurrent Neural Networks (RNN), Hot Deck (HD), and Last Observation Carried Forward (LOCF). The guideline contains the best method per gap size and scale of measurement. The four selected methods come from various backgrounds and are tested on a real BMS and meteorological dataset. The focus of this paper is not to impute every cell as accurately as possible but to impute trends back into the missing data. Performance is characterised by a set of criteria that allow users to choose the imputation method best suited to their needs. The criteria are Variance Error (VE) and Root Mean Squared Error (RMSE). VE has been given more weight, as it evaluates the imputed trend better than RMSE does. From preliminary results, it was concluded that the best K-values for KNN are 5 for the smallest gap and 100 for the larger gaps. Using a genetic algorithm, the best RNN architecture for the purpose of this paper was determined to be Gated Recurrent Units (GRU). The comparison was performed using a training dataset different from the imputation dataset. The results show no consistent link between differences in kurtosis or skewness and imputation performance. The experiment concluded that RNN is best for interval data and HD is best for both nominal and ratio data. No single method was best for all gap sizes, as performance depended on the data to be imputed.
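Two of the compared methods can be sketched on a toy BMS-like series: LOCF, which propagates the last valid observation forward, and a simple k-nearest-neighbour fill that averages the k temporally nearest observations. The data and the k value are illustrative only; the paper tunes k per gap size (5 for the smallest gap, 100 for larger ones):

```python
import numpy as np
import pandas as pd

# Hypothetical temperature readings with two gaps
s = pd.Series([20.1, 20.3, np.nan, np.nan, 21.0, 21.2, np.nan, 20.8])

# LOCF: carry the last observed value forward into each gap
locf = s.ffill()

def knn_fill(series: pd.Series, k: int = 5) -> pd.Series:
    """Fill each gap with the mean of the k temporally nearest observations."""
    out = series.copy()
    obs = series.dropna()
    for i in series.index[series.isna()]:
        order = np.argsort(np.abs(obs.index.values - i))[:k]
        out[i] = obs.iloc[order].mean()
    return out

knn3 = knn_fill(s, k=3)  # small k for small gaps, in the spirit of the guideline
```

Note that LOCF reproduces a flat segment, whereas the neighbour average follows the local trend, which is the distinction the paper's VE criterion is designed to capture.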
MULTIFILE
Over the past few years, there has been an explosion of data science as a profession and an academic field. The increasing impact and societal relevance of data science is accompanied by important questions that reflect this development: how can data science become more responsible and accountable while also responding to key challenges such as bias, fairness, and transparency in a rigorous and systematic manner? This Patterns special collection has brought together research and perspectives from academia and the public and private sectors, showcasing original research articles and perspectives pertaining to responsible and accountable data science.
MULTIFILE
Citizens regularly search the Web to make informed decisions on daily life questions, like online purchases, but how they reason with the results is unknown. This reasoning involves engaging with data in ways that require statistical literacy, which is crucial for navigating contemporary data. However, many adults struggle to critically evaluate and interpret such data and make data-informed decisions. Existing literature provides limited insight into how citizens engage with web-sourced information. We investigated: How do adults reason statistically with web-search results to answer daily life questions? In this case study, we observed and interviewed three vocationally educated adults searching for products or mortgages. Unlike data producers, consumers handle pre-existing, often ambiguous data with unclear populations and no single dataset. Participants encountered unstructured (web links) and structured data (prices). We analysed their reasoning and the process of preparing data, which is part of data-ing. Key data-ing actions included judging relevance and trustworthiness of the data and using proxy variables when relevant data were missing (e.g., price for product quality). Participants’ statistical reasoning was mainly informal. For example, they reasoned about association but did not calculate a measure of it, nor assess underlying distributions. This study theoretically contributes to understanding data-ing and why contemporary data may necessitate updating the investigative cycle. As current education focuses mainly on producers’ tasks, we advocate including consumers’ tasks by using authentic contexts (e.g., music, environment, deferred payment) to promote data exploration, informal statistical reasoning, and critical web-search skills—including selecting and filtering information, identifying bias, and evaluating sources.
LINK
How come Open Science is a well-shared vision among research communities, while the prerequisite practice of research data management (RDM) is lagging? This research sheds light on RDM adoption in the Dutch context of universities of applied sciences by studying the influencing technological, organizational, and environmental factors using the TOE framework. A survey was sent out to researchers at universities of applied sciences in the Netherlands. The analyses showed no significant relation between most of the influencing factors and the intention to comply with the RDM guidelines (at a significance level of .10, i.e., a 90% confidence level). Results did show a significant influence of the factor Management Support on compliance, with a p-value of 0.078. This research contributes to the knowledge on RDM adoption with the new insight that the factors used in this research do not, for the most part, seem to significantly influence RDM adoption in the Dutch context of universities of applied sciences. The research does show that the respondents have a positive attitude in their intention to change, increase, or invest time and effort towards RDM compliance. More research is advised to uncover factors that do significantly influence RDM adoption among universities of applied sciences in the Netherlands, so that stakeholders in Open Science and RDM can enhance their strategies.
MULTIFILE
To make effective financial decisions, individuals need both financial and numerical competence. The latter includes having numerical knowledge and skills, and the ability to apply them in a financial context. A positive attitude towards numbers, combined with the absence of math anxiety, proves beneficial. Additionally, higher-order numerical skills enhance the quality of financial decision-making. Challenges in any of these numeracy aspects may contribute to financial difficulties. However, the specific aspects of numeracy that are of crucial importance remain unclear. Therefore, our research addresses the question: Which aspects of numeracy are related to having financial problems? In this article, we explore this question through a literature review.
DOCUMENT
INTRODUCTION: In the Netherlands, Diagnostic Reference Levels (DRLs) have not been based on a national survey as proposed by the ICRP. Instead, local exposure data, expert judgment, and the international scientific literature were used as sources. This study investigated whether the current DRLs are reasonable for Dutch radiological practice. METHODS: A national project was set up in which radiography students carried out dose measurements in hospitals, supervised by medical physicists. The project ran from 2014 to 2017 and dose values were analysed for a trend over time. In the absence of such a trend, the joint yearly data sets were considered a single data set and were analysed together. In this way the national project mimicked a national survey. RESULTS: For six out of eleven radiological procedures, enough data was collected for further analysis. In the first step of the analysis, no trend over time was found for any of these procedures. In the second step, the joint analysis led to suggestions for five new DRL values that are far below the current ones. The new DRLs are based on the 75th percentile values of the distributions of all dose data per procedure. CONCLUSION: The results show that the current DRLs are too high for five of the six procedures that were analysed. For the other five procedures, more data needs to be collected. Moreover, the mean weights of the patients are higher than expected. This introduces bias when weights are not recorded and the mean weight is assumed to be 77 kg. IMPLICATIONS FOR PRACTICE: The current checking of doses for compliance with the DRLs needs to be changed. Both the procedure (regarding weights) and the values of the DRLs should be updated.
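The DRL derivation described above, taking the 75th percentile of the pooled dose distribution per procedure, can be sketched in a few lines. The dose values below are invented placeholders, not the study's data:

```python
import numpy as np

# Hypothetical pooled dose measurements for one procedure (e.g. in mGy),
# combining all yearly data sets after confirming no trend over time
pooled_doses_mGy = np.array([1.2, 0.8, 2.5, 1.9, 3.1, 0.9, 1.4, 2.2, 1.1, 2.8])

# Suggested DRL = 75th percentile of the pooled distribution
suggested_drl = np.percentile(pooled_doses_mGy, 75)
```

A local practice whose typical dose for the procedure exceeds this value would then be expected to review its protocol, which is the intended use of a DRL.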
DOCUMENT
As every new generation of civil aircraft creates more on-wing data and fleets gradually become more connected with the ground, an increased number of opportunities can be identified for more effective Maintenance, Repair and Overhaul (MRO) operations. Data are becoming a valuable asset for aircraft operators. Sensors measure and record thousands of parameters at increased sampling rates. However, data do not serve any purpose per se; it is the analysis that unleashes their value. Data analytics methods can be simple, making use of visualizations, or more complex, using sophisticated statistics and Artificial Intelligence algorithms. Every problem should be approached with the most suitable and least complex method. In MRO operations, two major categories of on-wing data analytics problems can be identified. The first requires the identification of patterns, which enables the classification and optimization of different maintenance and overhaul processes. The second category requires the identification of rare events, such as the unexpected failure of parts; this cluster of problems relies on the detection of meaningful outliers in large data sets. Different Machine Learning methods can be suggested here, such as Isolation Forest and Logistic Regression. In general, the use of data analytics for maintenance or failure prediction is a scientific field with great potential. Due to its complex nature, the opportunities for aviation data analytics in MRO operations are numerous. As MRO services focus increasingly on long-term contracts, maintenance organizations with the right forecasting methods will have an advantage. Data accessibility and data quality are two key factors. At the same time, numerous technical developments related to data transfer and data processing are promising for the future.
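The rare-event category named above can be illustrated with one of the mentioned methods, Isolation Forest, applied to synthetic "sensor" data. The features, sample sizes, and contamination rate are illustrative assumptions, not values from any real MRO dataset:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic routine readings: 200 samples of 3 hypothetical sensor channels
normal = rng.normal(loc=0.0, scale=1.0, size=(200, 3))
# One injected rare event, e.g. a sudden sensor spike before a part failure
anomaly = np.array([[8.0, 8.0, 8.0]])
X = np.vstack([normal, anomaly])

# contamination is the assumed fraction of outliers in the data
clf = IsolationForest(contamination=0.01, random_state=0).fit(X)
labels = clf.predict(X)  # -1 = outlier, 1 = inlier
```

Isolation Forest flags points that are easy to isolate with random splits, which suits the "meaningful outliers in large data sets" problem without requiring labelled failure examples.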
DOCUMENT