This report describes how the research experiences and discourse-analytical (DA) results from the research project The Next Level (TNL) can be implemented and disseminated in practice, in particular in the application and development of social media monitoring tools.
Parents who grew up without digital monitoring have a plethora of parental monitoring opportunities at their disposal. While they can engage in surveillance practices to safeguard their children, they also have to balance freedom against control. This research is based on in-depth interviews with eleven early adolescents and eleven parents to investigate everyday negotiations of parental monitoring. Parental monitoring is presented as a form of lateral surveillance because it entails parents engaging in surveillance practices to monitor their children. The results indicate that some parents are motivated to use digital monitoring tools to safeguard and guide their children, while others refrain from surveillance practices to prioritise freedom and trust. The most common forms of surveillance are location tracking and the monitoring of digital behaviour and screen time. Moreover, we provide unique insights into the use of student tracking systems as an impactful form of control. Early adolescents negotiate these parental monitoring practices, with responses ranging from acceptance to active forms of resistance. Some children also monitor their parents, showcasing a reciprocal form of lateral surveillance. In all families, monitoring practices are negotiated in open conversations that also foster digital resilience. This study shows that the concepts of parental monitoring and lateral surveillance fall short in grasping the reciprocal character of monitoring and the power dynamics in parent-child relations. We therefore propose that monitoring practices in families can best be understood as family surveillance, providing a novel concept to understand how surveillance is embedded in contemporary media practices among interconnected family members.
Living labs are complex multi-stakeholder collaborations that often employ a user-centred, design-driven methodology to foster innovation. Conventional management tools fall short in evaluating them. However, some methods and tools dedicated to living labs' special characteristics and goals have already been developed, most of them still in their testing phase. These tools are not easily accessible and can only be found in extensive research reports, which are difficult to dissect. This paper therefore reviews seven evaluation methods and tools developed specifically for living labs. Each tool is presented in the same manner: an introduction to the tool (1), who uses the tool (2), and how it should be used (3). The first set of tools, namely “ENoLL 20 Indicators”, “SISCODE Self-assessment”, and “SCIROCCO Exchange Tool”, assesses a living lab as an organisation, diving deeper into its organisational activities and complex context. The second set of methods and tools, “FormIT” and “Living Lab Markers”, evaluates living labs’ methodologies: the processes they use to arrive at innovations. The paper's final section presents the “CheRRIes Monitoring and Evaluation Tool” and the “TALIA Indicator for Benchmarking Service for Regions”, which assess the regional impact made by living labs. As every living lab differs in its maturity (as an organisation and in its methodology) and in the scope of impact it wants to make, the most crucial decision when evaluating is determining the focus of the assessment. This overview offers a first orientation on worked-out methods and on possible indicators to use. It also concludes that the existing tools are quite managerial in their method and aesthetics, and it calls on designers and social scientists to develop more playful, engaging and (possibly) learning-oriented tools for evaluating living labs in the future.
In this project, researchers from the research group (Lectoraat) work together with public organisations towards a tool with which undercurrents in the public debate around issues can be noticed earlier. We explore which algorithm can be used to detect patterns in rumour formation and mobilisation, and also how best to design the interaction between newsroom analysts and the output of a monitoring tool.

Goal: The goal of this project is a broad and structurally applicable approach to issue management: how can the communication professionals of public organisations notice potential issues on social media at an early stage?

Results: We want to achieve this by gathering knowledge and insight on the one hand and, on the other, by translating the outcomes into practical aids for public organisations: tools, a manual, and training.

Duration: 1 October 2022 - 30 September 2024

Approach: Through cases contributed by the practice partners and through focus groups, we stay in close contact with the consortium. In the first work packages we examine the various cases using discourse analysis. We then use the insights gained to determine how best to design the human-machine interaction, such that interactive visualisations keep providing triggers for the communication and management of issues. Based on these insights, we build an interface that enables analysts and communication professionals to identify issues at an early stage.
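The project text leaves open which algorithm will be used. Purely as an illustration, one simple starting point for early issue signalling is burst detection on the volume of issue-related posts. The following minimal Python sketch, with hypothetical data, names, and thresholds, flags hours whose post count deviates strongly from the recent baseline:

    # Hypothetical sketch: flag "undercurrent" candidates by detecting bursts
    # in the hourly volume of posts mentioning an issue term. The project does
    # not prescribe this method; a rolling z-score is just one simple option.
    from collections import deque
    from statistics import mean, stdev

    def detect_bursts(hourly_counts, window=24, threshold=3.0):
        """Yield (hour_index, count) pairs whose count exceeds the rolling
        mean of the previous `window` hours by `threshold` standard deviations."""
        history = deque(maxlen=window)
        for i, count in enumerate(hourly_counts):
            if len(history) == window:
                mu, sigma = mean(history), stdev(history)
                if sigma > 0 and (count - mu) / sigma > threshold:
                    yield i, count  # candidate signal for a newsroom analyst
            history.append(count)

    # Example: a quiet baseline followed by a sudden spike in mentions.
    counts = [5, 6, 4, 7, 5, 6, 5, 4, 6, 5, 7, 5,
              6, 4, 5, 6, 5, 7, 4, 6, 5, 6, 5, 4, 42]
    print(list(detect_bursts(counts)))  # -> [(24, 42)]

In practice, a newsroom tool would combine such volume signals with the discourse-analytical patterns the project investigates; this sketch only shows the mechanical core of one such signal.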
The focus of the research is 'Automated Analysis of Human Performance Data'. The three interconnected main components are (i) Human Performance, (ii) Monitoring Human Performance, and (iii) Automated Data Analysis. Human performance is both the process and the result of a person interacting with a context to engage in tasks, where the performance range is determined by the interaction between the person and the context. Cheap and reliable wearable sensors allow for gathering large amounts of data, which is very useful for understanding, and possibly predicting, the performance of the user. Given the amount of data generated by such sensors, manual analysis becomes infeasible; tools should be devised to perform automated analysis that looks for patterns, features, and anomalies. Such tools can help transform wearable sensors into reliable high-resolution devices, help experts analyse wearable sensor data in the context of human performance, and support diagnosis and intervention. Shyr and Spisic describe automated data analysis as a systematic process of inspecting, cleaning, transforming, and modelling data with the goal of discovering useful information, suggesting conclusions, and supporting decision making for further analysis. Their philosophy is to do the tedious part of the work automatically and to allow experts to focus on performing their research and applying their domain knowledge. However, automated data analysis means that the system has to teach itself to interpret interim results and to iterate. Knuth stated: 'Science is knowledge which we understand so well that we can teach it to a computer; and if we don't fully understand something, it is an art to deal with it' [Knuth, 1974]. The knowledge on human performance and its monitoring is to be 'taught' to the system. To be able to construct automated analysis systems, an overview of the essential processes and components of these systems is needed. As Knuth also put it: 'Since the notion of an algorithm or a computer program provides us with an extremely useful test for the depth of our knowledge about any given subject, the process of going from an art to a science means that we learn how to automate something.'
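Shyr and Spisic's four steps (inspect, clean, transform, model) can be made concrete with a small sketch. The Python sketch below applies them to wearable heart-rate samples; the data format, thresholds, and function names are assumptions for illustration, not part of the original research:

    # Hypothetical sketch of the inspect/clean/transform/model steps that
    # Shyr and Spisic describe, applied to wearable heart-rate samples.
    # The data format and thresholds are assumptions for illustration only.
    from statistics import median

    def clean(samples, lo=30, hi=220):
        """Inspect and clean: drop physiologically implausible readings."""
        return [s for s in samples if lo <= s <= hi]

    def transform(samples, width=5):
        """Transform: smooth with a rolling median to suppress sensor noise."""
        half = width // 2
        return [median(samples[max(0, i - half):i + half + 1])
                for i in range(len(samples))]

    def model(samples, jump=25):
        """Model: flag indices where the smoothed signal changes abruptly,
        as candidate anomalies for an expert to review."""
        return [i for i in range(1, len(samples))
                if abs(samples[i] - samples[i - 1]) > jump]

    raw = [72, 74, 73, 900, 75, 74, 76, 140, 142, 141, 75, 74]  # 900 = glitch
    smoothed = transform(clean(raw))
    print(model(smoothed))  # -> [6, 9, 10]: onset and decay of the elevated segment

A rolling median is used rather than a mean so that single-sample sensor glitches vanish entirely instead of being averaged into the signal; the flagged indices are handed to the expert rather than acted on automatically, matching the philosophy quoted above.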