Enhancing communication performance skills may help children with Down syndrome (DS) to expand their opportunities for participation in daily life. It is a clinical challenge for speech-language pathologists (SLPs) to disentangle the various mechanisms that contribute to the language and communication problems that children with DS encounter. Without clarity about these different levels of functioning, interventions may be poorly conceived or improperly implemented. In the present study, the International Classification of Functioning, Disability and Health – Children and Youth Version (ICF-CY) framework was used to classify factors contributing to communication performance in a multiple case study of six young children with DS. Within a comprehensive assessment, we identified individual and environmental facilitators and barriers, leading to an integrative profile of communication performance (IPCP) for each child. Although these six children shared a developmental age, expressive vocabulary age, and/or level of communicative intent, they faced similar but also unique personal and environmental factors that play an important role in their communication performance. Our data reveal that different combinations of factors may lead to the same language outcomes and vice versa, based on a unique pattern of interdependency among ICF-CY domains. Planning SLP interventions for enhancing communication performance in children with DS should therefore be based on a comprehensive view of the competences and limitations of each individual child and his or her significant communication partners. This evaluation should address facilitators and barriers in body functions, structures, activities, participation and environment, with a specific focus on individual strengths. The ICF-CY provides a useful framework for constructing an IPCP that serves this purpose.
This paper describes an experiment within the Digital Society Hub, conducted with students of the master's programme in International Communication and with IBM, into big data applications within the communication profession.
During the past two decades, the implementation and adoption of information technology have rapidly increased. As a consequence, the way businesses operate has changed dramatically. For example, the amount of data has grown exponentially. Companies are looking for ways to use these data to add value to their business. This has implications for the manner in which (financial) governance needs to be organized. The main purpose of this study is to obtain insight into the changing role of controllers in adding value to the business by means of data analytics. To answer the research question, a literature study was first performed to establish a theoretical foundation concerning data analytics and its potential use. Second, nineteen interviews were conducted with controllers, data scientists and academics in the financial domain. Third, a focus group with experts was organized in which additional data were gathered. Based on the literature study and the participants' responses, it is clear that the challenge of the data explosion consists of converting data into information, knowledge and meaningful insights to support decision-making processes. Performing data analyses enables the controller to support rational decision making, complementing the intuitive decision making of (senior) management. In this way, the controller has the opportunity to take the lead in the information provision within an organization. However, controllers need more advanced data science and statistical competences to be able to provide management with effective analyses. Specifically, we found that an important statistical skill is the visualization and communication of statistical analyses. Controllers need this skill in order to grow in their role as business partner.
In the past decades, we have faced an increase in the digitization, digitalization, and digital transformation of our work and daily life. Breakthroughs of digital technologies in fields such as artificial intelligence, telecommunications, and data science bring solutions for large societal questions but also pose a new challenge: how do we equip our (future) workforce with the necessary digital skills, knowledge, and mindset to respond to and drive digital transformation? Developing and supporting our human capital is paramount, and failure to do so may leave us behind at the individual (digital divide), organizational (economic disadvantage), and societal (failure to address grand societal challenges) levels. Digital transformation necessitates continuous learning approaches and the scaffolding of interdisciplinary collaboration and innovation practices that match complex real-world problems. Research and industry have advocated setting up learning communities as spaces in which (future) professionals of different backgrounds can work, learn, and innovate together. However, insights into how and under which circumstances learning communities contribute to accelerated learning and innovation for digital transformation are lacking. In this project, we will study 13 existing and developing learning communities that work on challenges related to digital transformation to understand their working mechanisms. We will develop a wide variety of methods and tools to support learning communities and integrate these in a Learning Communities Incubator. These insights, methods and tools will result in more effective learning communities that will eventually (a) increase the potential of human capital to innovate and (b) accelerate innovation for digital transformation.
Today, embedded devices such as banking/transportation cards, car keys, and mobile phones use cryptographic techniques to protect personal information and communication. Such devices are increasingly becoming the targets of attacks that try to capture the underlying secret information, e.g., cryptographic keys. Attacks targeting not the cryptographic algorithm but its implementation are especially devastating; the best-known examples are so-called side-channel and fault injection attacks. Such attacks, often jointly termed physical (implementation) attacks, are difficult to preclude, and once the key (or other data) is recovered, the device is useless. To mitigate such attacks, security evaluators use the same techniques as attackers and look for possible weaknesses in order to “fix” them before deployment. Unfortunately, the attackers’ resourcefulness on the one hand, and the usually short time available to security evaluators (and the human error factor) on the other, make this an unfair race. Consequently, researchers are looking into possible ways of making security evaluations more reliable and faster. To that end, machine learning techniques have shown themselves to be a viable candidate, although the challenge is far from solved. Our project aims at the development of automatic frameworks able to assess various potential side-channel and fault injection threats coming from diverse sources. Such systems will give security evaluators, and above all companies producing chips for security applications, an option to find potential weaknesses early and to assess the trade-off between making the product more secure and making it more implementation-friendly. To this end, we plan to use machine learning techniques coupled with novel techniques not explored before for side-channel and fault analysis. In addition, we will design new techniques specially tailored to improve the performance of this evaluation process.
Our research fills the gap between what is known in academia about physical attacks and what is needed in industry to prevent such attacks. In the end, once our frameworks become operational, they could also be a useful tool for mitigating other types of threats, such as ransomware or rootkits.
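To give a flavour of the kind of automated leakage assessment described above, the following Python sketch is a deliberately simplified illustration of our own (not the project's actual framework): it performs a basic correlation power analysis (CPA) on simulated traces. A toy device is assumed to leak the Hamming weight of plaintext XOR key plus Gaussian noise, and the key byte is recovered as the hypothesis whose predicted leakage correlates best with the measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

def hw(x):
    # Hamming weight of each byte in an integer array
    return np.unpackbits(x.astype(np.uint8)[:, None], axis=1).sum(axis=1)

# --- Simulate a leaking device (assumed Hamming-weight leakage + noise) ---
SECRET_KEY = 0x3C
N_TRACES = 2000
plaintexts = rng.integers(0, 256, N_TRACES)
traces = hw(plaintexts ^ SECRET_KEY) + rng.normal(0.0, 1.0, N_TRACES)

# --- CPA: correlate measured leakage with each key-byte hypothesis ---
def cpa_recover_key(plaintexts, traces):
    scores = np.empty(256)
    for k in range(256):
        model = hw(plaintexts ^ k)          # predicted leakage under hypothesis k
        scores[k] = abs(np.corrcoef(model, traces)[0, 1])
    return int(scores.argmax())             # best-correlating hypothesis

recovered = cpa_recover_key(plaintexts, traces)
print(hex(recovered))  # prints 0x3c
```

A real evaluation would use measured traces, a nonlinear intermediate value (e.g. an S-box output), and far more sophisticated statistical or machine-learning distinguishers; the sketch only shows why key-dependent leakage makes automated key recovery, and hence automated evaluation, possible.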
The bi-directional communication link with the physical system is one of the main distinguishing features of the Digital Twin paradigm. This continuous flow of data and information along its entire life cycle is what makes a Digital Twin a dynamic and evolving entity rather than merely a high-fidelity copy. There is an increasing realisation of the importance of a well-functioning digital twin in critical infrastructures, such as water networks. The configuration of water network assets, such as valves, pumps, boosters and reservoirs, must be carefully managed and the water flows rerouted, often manually, which is a slow and costly process. State-of-the-art water management systems assume a relatively static physical model that requires manual corrections. Any change in the network conditions or topology due to degraded control mechanisms, ongoing maintenance, or changes in the external context, such as a heat wave, makes the existing model diverge from reality. Our project proposes a unique approach to real-time monitoring of the water network that can handle automated changes to the model, based on the measured discrepancy between the model and the obtained IoT sensor data. We aim at an evolutionary approach that can apply detected changes to the model and update it in real time without the need for additional model validation and calibration. State-of-the-art deep learning algorithms will be applied to create a machine-learning, data-driven simulation of the water network system. Moreover, unlike most research, which focuses on the detection of network problems and sensor faults, we will investigate the possibility of going a step further and continuing to use the degraded network and malfunctioning sensors until maintenance and repairs can take place, which can take a long time.
We will create a formal model and analyse the effect of different malfunctions on data readings, in order to construct a mitigating mechanism that is tailor-made for each malfunction type and allows continued use of the data, albeit in a limited capacity.
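To make the discrepancy-driven update idea concrete, here is a minimal Python sketch (our own illustration under strong simplifying assumptions, not the project's actual algorithm): a toy twin model predicts a flow reading from a single demand parameter, and whenever the measured discrepancy exceeds a threshold, the parameter is nudged toward the measurement instead of triggering a manual recalibration.

```python
# Illustrative digital-twin update loop; all names, parameters and the
# one-parameter "model" are hypothetical, chosen only to show the mechanism.

class TwinModel:
    def __init__(self, demand=1.0, threshold=0.2, gain=0.5):
        self.demand = demand        # model parameter (e.g. nodal demand, L/s)
        self.threshold = threshold  # discrepancy that triggers an update
        self.gain = gain            # update step size

    def predict_flow(self):
        return self.demand          # toy model: predicted flow equals demand

    def assimilate(self, measured_flow):
        residual = measured_flow - self.predict_flow()
        if abs(residual) > self.threshold:
            # evolutionary update: nudge the model toward the measurement
            self.demand += self.gain * residual
        return residual

twin = TwinModel(demand=1.0)
# Simulate a real change in the network: the true demand drifts to 2.0
for measured in [1.05, 1.9, 2.0, 2.0]:
    twin.assimilate(measured)
print(round(twin.demand, 2))  # prints 1.86: the model has tracked the change
```

In the actual project the "model" would be a full hydraulic or learned simulation and the update rule far richer, but the loop structure is the same: predict, measure the discrepancy against IoT sensor data, and update the model online rather than diverging from reality.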