The number of Electric Vehicles (EVs) is expected to increase exponentially in the coming years. The growing presence of charging points generates a multitude of interactions between EV users, particularly in metropolitan areas where charging infrastructure is largely part of the public domain. There is a knowledge gap as to how current decisions on charging infrastructure deployment affect both current and future infrastructure performance. This thesis attempts to bridge that gap by creating a deeper understanding of the relation between charging behavior, charging infrastructure deployment, and performance. The results demonstrate how both strategic and demand-driven deployment strategies affect performance metrics. A case study in the Netherlands found that during the initial deployment phase, strategically placed Charging Points (CPs) facilitate EV users better than demand-driven deployment; as EV adoption increased, demand-driven CPs came to outperform strategic CPs. The thesis further shows that there are nine EV user types, each with distinct behavior in terms of charging frequency and mean energy uptake, both of which relate to aggregate CP performance, and that user type composition, interactions between users, and battery size play an important role in explaining the performance of charging infrastructure. A validated data-driven agent-based model was developed to explore the effects of interactions in the EV system and how they influence performance. The simulation results demonstrate a non-linear relation between system utilization and inconvenience, even in the base case scenario.
They also show that a significant rise in the EV user population will increase occupancy by non-habitual charging at the expense of habitual EV users, leading to an expected decline in occupancy for habitual users. Additional simulation studies support the hypothesis that several complex systems properties are currently present and affect the relation between performance and occupation.
Trustworthy data-driven prognostics in gas turbine engines are crucial for safety, cost-efficiency, and sustainability. Accurate predictions depend on data quality, model accuracy, uncertainty estimation, and practical implementation. This work discusses data quality attributes to build trust using anonymized real-world engine data, focusing on traceability, completeness, and representativeness. A significant challenge is handling missing data, which introduces bias and affects training and predictions. The study compares the accuracy of predictions using Exhaust Gas Temperature (EGT) margin, a key health indicator, by keeping missing values, using KNN-imputation, and employing a Generalized Additive Model (GAM). Preliminary results indicate that while KNN-imputation can be useful for identifying general trends, it may not be as effective for specific predictions compared to GAM, which considers the context of missing data. The choice of method depends on the study’s objective: broad trend forecasting or specific event prediction, each requiring different approaches to manage missing data.
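The trade-off between the two imputation strategies can be illustrated with a minimal, purely hypothetical sketch: a toy EGT-margin trace with gaps, filled once by averaging the nearest observed neighbours (a simple stand-in for KNN-imputation) and once from a fitted trend (a crude linear proxy for a GAM smooth that uses the context of the missing points). The data, function names, and the linear trend are illustrative assumptions, not the study's actual pipeline.

```python
def knn_impute(series, k=2):
    """Fill None entries with the mean of the k nearest observed
    neighbours by index distance -- a simple stand-in for KNN-imputation."""
    observed = [(i, v) for i, v in enumerate(series) if v is not None]
    out = list(series)
    for i, v in enumerate(series):
        if v is None:
            nearest = sorted(observed, key=lambda p: abs(p[0] - i))[:k]
            out[i] = sum(val for _, val in nearest) / len(nearest)
    return out

def trend_impute(series):
    """Fill gaps from a least-squares linear trend over the observed
    points -- a crude proxy for a GAM-style smooth fit to the series."""
    obs = [(i, v) for i, v in enumerate(series) if v is not None]
    n = len(obs)
    mx = sum(i for i, _ in obs) / n
    my = sum(v for _, v in obs) / n
    slope = (sum((i - mx) * (v - my) for i, v in obs)
             / sum((i - mx) ** 2 for i, _ in obs))
    return [v if v is not None else my + slope * (i - mx)
            for i, v in enumerate(series)]

# Toy EGT-margin trace (deg C) with two gaps; margin erodes over cycles.
egt_margin = [40.0, 39.0, None, 37.0, 36.0, None, 34.0]
print(knn_impute(egt_margin))
print(trend_impute(egt_margin))
```

On this cleanly linear toy trace both methods agree; the methods diverge when the gap sits near a change in the degradation trend, where a neighbour average ignores context that a fitted smooth can exploit, which mirrors the preliminary finding above.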
During the past two decades the implementation and adoption of information technology has rapidly increased. As a consequence, the way businesses operate has changed dramatically. For example, the amount of data has grown exponentially, and companies are looking for ways to use this data to add value to their business. This has implications for the manner in which (financial) governance needs to be organized. The main purpose of this study is to obtain insight into the changing role of controllers in adding value to the business by means of data analytics. To answer the research question, first, a literature study was performed to establish a theoretical foundation concerning data analytics and its potential use. Second, nineteen interviews were conducted with controllers, data scientists, and academics in the financial domain. Third, a focus group with experts was organized in which additional data were gathered. Based on the literature study and the participants' responses, it is clear that the challenge of the data explosion consists of converting data into information, knowledge, and meaningful insights to support decision-making processes. Performing data analyses enables the controller to support rational decision making, complementing the intuitive decision making by (senior) management. In this way, the controller has the opportunity to take the lead in the information provision within an organization. However, controllers need more advanced data science and statistical competences to provide management with effective analysis. Specifically, we found that an important skill regarding statistics is the visualization and communication of statistical analysis. Controllers need this in order to grow in their role as business partner.
The focus of this project is on improving the resilience of hospitality Small and Medium Enterprises (SMEs) by enabling them to take advantage of digitalization tools, and data analytics in particular. Hospitality SMEs play an important role in their local community but are vulnerable to shifts in demand. Due to a lack of resources (time, finance, and sometimes knowledge), they do not have sufficient access to the data analytics tools that are typically available to larger organizations. The purpose of this project is therefore to develop a prototype infrastructure or ecosystem showcasing how Dutch hospitality SMEs can develop their data analytics capability in such a way that they increase their resilience to shifts in demand. The one-year exploration period will be used to assess the feasibility of such an infrastructure and will address technological aspects (e.g. the kind of technological platform), process aspects (e.g. prerequisites for collaboration such as confidentiality and safety of data), knowledge aspects (e.g. what knowledge of data analytics SMEs need and through what medium), and organizational aspects (what kind of cooperation form is necessary and how it should be financed).
Due to societal developments, like the introduction of the 'civil society', policy stimulating longer living at home, and the separation of housing and care, the housing situation of older citizens is a relevant and pressing issue for housing, governance, and care organizations. The current situation of living with care already benefits from technological advancement. The wide application of technology, especially in care homes, brings the emergence of a new source of information that is invaluable for understanding how the smart urban environment affects the health of older people. The goal of this proposal is to develop an approach for designing smart neighborhoods that assist and engage the older adults living there. This approach will be applied to a neighborhood in Aalst-Waalre, which will be developed into a living lab. The research will involve: (1) insight into social-spatial factors underlying a smart neighborhood; (2) identifying the governance and organizational context; (3) identifying needs and preferences of the (future) inhabitants; (4) matching needs and preferences to potential socio-techno-spatial solutions. A mixed methods approach, fusing quantitative and qualitative methods, will be used to understand the impacts of the smart environment. After 12 months, the project will deliver design solutions and strategies for a more care-friendly neighborhood by employing several concepts of urban computing, such as pattern recognition and predictive modelling, using focus groups drawn from the different organizations as well as primary end-users, and exploring how physiological data can be embedded in data-driven strategies for the enhancement of active ageing in this neighborhood.
Developing a framework that integrates Advanced Language Models into the qualitative research process. Qualitative research, vital for understanding complex phenomena, is often limited by labour-intensive data collection, transcription, and analysis processes. This hinders scalability, accessibility, and efficiency in both academic and industry contexts. As a result, insights are often delayed or incomplete, impacting decision-making, policy development, and innovation. The lack of tools to enhance accuracy and reduce human error exacerbates these challenges, particularly for projects requiring large datasets or quick iterations. Addressing these inefficiencies through AI-driven solutions like AIDA can empower researchers, enhance outcomes, and make qualitative research more inclusive, impactful, and efficient. The AIDA project enhances qualitative research by integrating AI technologies to streamline transcription, coding, and analysis processes. This innovation enables researchers to analyse larger datasets with greater efficiency and accuracy, providing faster and more comprehensive insights. By reducing manual effort and human error, AIDA empowers organisations to make informed decisions and implement evidence-based policies more effectively. Its scalability supports diverse societal and industry applications, from healthcare to market research, fostering innovation and addressing complex challenges. Ultimately, AIDA contributes to improving research quality, accessibility, and societal relevance, driving advancements across multiple sectors.