The technical manual for the digital evaluation tool QualiTePE supports users in creating, conducting and analysing evaluations that record the quality of teaching in physical education. The information on the General Data Protection Regulation (GDPR) instructs users on how to anonymise the data collected in evaluations and explains which legal bases apply to the collection of personal data. Both the technical manual and the GDPR information are available in English, German, French, Italian, Spanish, Dutch, Swedish, Slovenian, Czech and Greek.
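As a rough illustration of the anonymisation such guidance describes, the following sketch replaces a participant identifier with a salted one-way hash before a response is stored, so that answers remain linkable within one evaluation but cannot be traced back to a person. The field names and the salt handling are assumptions made for the example; they do not describe QualiTePE's actual implementation.

import hashlib
import secrets

# Illustrative only: field names and salt handling are assumptions,
# not QualiTePE's implementation.
SALT = secrets.token_bytes(16)  # per-evaluation salt, discarded afterwards

def anonymise_response(response: dict) -> dict:
    """Drop direct identifiers and replace the participant ID
    with a salted one-way hash."""
    pseudonym = hashlib.sha256(SALT + response["participant_id"].encode()).hexdigest()[:12]
    return {
        "participant": pseudonym,
        "class_level": response["class_level"],  # keep coarse context only
        "ratings": response["ratings"],          # the actual evaluation answers
        # names, e-mail addresses and free-text fields naming persons are dropped
    }

print(anonymise_response({
    "participant_id": "student-042",
    "class_level": "grade 8",
    "ratings": {"clarity": 4, "motivation": 5},
}))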
DOCUMENT
Many governments and companies assume that the new privacy law (GDPR) will not amount to much. But do they know where they keep all their data? And what happens to it? Data lying around unmanaged is a major problem, say Hans Henseler and Geert-Jan van Bussel. Companies and governments do not have their information governance and information value chain under control.
DOCUMENT
Design and development practitioners, such as those in game development, often have difficulty understanding and adhering to the European General Data Protection Regulation (GDPR), especially when designing in a privacy-sensitive way. An inadequate understanding of how to apply the GDPR in the game development process can lead to one of two consequences: 1. inadvertently violating the GDPR, with sizeable fines as potential penalties; or 2. avoiding the use of user data entirely. In this paper, we present our work on designing and evaluating the "GDPR Pitstop tool", a gamified questionnaire developed to empower game developers and designers to increase their legal awareness of the GDPR in a relatable and accessible manner. The GDPR Pitstop tool was developed with a user-centered approach and in close contact with stakeholders, including game development practitioners, legal experts, and communication and design experts. Three design choices worked for this target group: 1. careful crafting of the language of the questions; 2. a flexible structure; and 3. a playful design. By combining these three elements in the GDPR Pitstop tool, GDPR awareness within the gaming industry can be improved, and game developers and designers can be empowered to use user data in a GDPR-compliant manner. This approach can also be scaled to confront other tricky issues faced by design professionals, such as privacy by design.
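The paper does not publish the tool's internals, but the "flexible structure" mentioned above can be pictured as a branching questionnaire in which earlier answers decide which questions remain relevant. The following sketch is a hypothetical data model, not the GDPR Pitstop tool's actual code.

# Hypothetical sketch of a branching questionnaire; question texts and
# branch keys are invented for the example.
QUESTIONS = {
    "collects_data": {
        "text": "Does your game collect any player data?",
        "next": {"yes": "identifies_players", "no": None},  # "no" ends the run
    },
    "identifies_players": {
        "text": "Does that data identify individual players?",
        "next": {"yes": "legal_basis", "no": None},
    },
    "legal_basis": {
        "text": "Have you documented a legal basis for processing it?",
        "next": {"yes": None, "no": None},
    },
}

def run(answers: dict) -> list:
    """Walk the questionnaire, following only branches made relevant
    by the given answers."""
    asked, current = [], "collects_data"
    while current is not None:
        asked.append(QUESTIONS[current]["text"])
        current = QUESTIONS[current]["next"][answers[current]]
    return asked

print(run({"collects_data": "yes", "identifies_players": "yes", "legal_basis": "no"}))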
LINK
Data collected from fitness trackers worn by employees could be very useful for businesses. The sharing of this data with employers is already a well-established practice in the United States, and companies in Europe are showing an interest in introducing such devices among their workforces. Our argument is that employers' processing of their employees' fitness tracker data is unlikely to be lawful under the General Data Protection Regulation (GDPR). Wearable fitness trackers, such as Fitbit and Apple Watch devices, collate intimate data about the wearer's location, sleep and heart rate. As a result, we consider that they not only represent a novel threat to the privacy and autonomy of the wearer, but that the data gathered constitute 'health data' regulated by Article 9. Processing health data, including, in our view, fitness tracking data, is prohibited unless one of the conditions specified in the GDPR applies. After examining a number of legitimate bases on which employers can rely, we conclude that the data processing practices considered do not comply with the principle of lawfulness that is central to the GDPR regime. We suggest alternative schemes by which wearable fitness trackers could be integrated into an organization to support healthy habits amongst employees, but in a manner that respects the data privacy of the individual wearer.
MULTIFILE
Employers in the European Union are using wearable technology, such as smartphones, to prevent their businesses from becoming sources of infection. They try to achieve this through source and contact tracing and the tracking of employees. But how does this relate to the European GDPR rules? Stefania Marassi of The Hague University of Applied Sciences argues that strict conditions must apply before this technology may be used.
MULTIFILE
The American company Amazon has made headlines several times for monitoring its workers in warehouses across Europe and beyond.1 What is new is that a national data protection authority has recently issued a substantial fine of €32 million to the e-commerce giant for breaching several provisions of the General Data Protection Regulation (GDPR) with its surveillance practices. On 27 December 2023, the Commission nationale de l'informatique et des libertés (CNIL), the French Data Protection Authority, determined that Amazon France Logistique infringed, among others, Articles 6(1)(f) (principle of lawfulness) and 5(1)(c) (data minimization) GDPR by processing some of the workers' data collected by handheld scanners in the distribution centers of Lauwin-Planque and Montélimar.2 Scanners enable employees to perform direct tasks such as picking and scanning items while continuously collecting data on quality of work, productivity, and periods of inactivity.3 According to the company, this data processing is necessary for various purposes, including quality and safety in warehouse management, employee coaching and performance evaluation, and work planning.4 The CNIL's decision centers on data protection law, but its implications reach far beyond it, into workers' fundamental right to health and safety at work. As noted in legal literature and policy documents, digital surveillance practices can have a significant impact on workers' mental health and overall well-being.5 This commentary examines the CNIL's decision through the lens of European occupational health and safety (EU OHS). Its scope is limited to how the French authority has interpreted the data protection principle of lawfulness, taking into account the impact of some of Amazon's monitoring practices on workers' fundamental right to health and safety.
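The data-minimization principle at issue (Article 5(1)(c) GDPR) can be sketched in a few lines: retain only the aggregate that a stated purpose actually needs and discard the raw event stream. The event fields below are assumptions for illustration, not the data schema from the CNIL decision.

from datetime import datetime
from statistics import mean

# Assumed event fields, for illustration only.
scan_events = [
    {"worker": "w1", "ts": datetime(2024, 1, 8, 9, 0), "item_ok": True},
    {"worker": "w1", "ts": datetime(2024, 1, 8, 9, 1), "item_ok": True},
    {"worker": "w1", "ts": datetime(2024, 1, 8, 9, 7), "item_ok": False},
]

def shift_summary(events):
    """Reduce raw scan events to a coarse, purpose-bound summary;
    the raw event stream can then be deleted."""
    return {
        "items_scanned": len(events),
        "error_rate": 1 - mean(e["item_ok"] for e in events),
    }

print(shift_summary(scan_events))
# Per-second timestamps, and with them idle-time tracking, are discarded.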
MULTIFILE
In this project, we take a look at the laws and regulations surrounding data collection using sensors in assistive technology, and at the literature on people's concerns about this technology. We also look into the Smart Teddy device and how it operates. An analysis required by the General Data Protection Regulation (GDPR) [5] reveals the privacy and security risks in this project and how to mitigate them.
https://nl.linkedin.com/in/haniers
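A GDPR-mandated analysis of this kind is commonly recorded as a risk register in which each risk is scored and paired with a mitigation. The sketch below shows one minimal way to do that, ranking risks by likelihood times impact; the risks, scores and mitigations are illustrative, not findings of the project.

# Hypothetical risk register for a sensor device such as the Smart Teddy;
# entries and scores are invented for the example.
risks = [
    {"risk": "audio sensor captures bystander conversations",
     "likelihood": 4, "impact": 5,
     "mitigation": "process audio on-device, store only derived indicators"},
    {"risk": "health indicators transmitted without encryption",
     "likelihood": 2, "impact": 5,
     "mitigation": "enforce TLS for all device-to-server traffic"},
]

# Rank by severity (likelihood x impact) so the biggest risks are mitigated first.
for r in sorted(risks, key=lambda r: r["likelihood"] * r["impact"], reverse=True):
    print(f'{r["likelihood"] * r["impact"]:>2}  {r["risk"]}  ->  {r["mitigation"]}')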
MULTIFILE
This guide was developed for designers and developers of AI systems, with the goal of ensuring that these systems are sufficiently explainable. Sufficient here means that the system meets the legal requirements of the AI Act and the GDPR and that users can use it properly. Explainability of decisions is an important requirement in many systems and even an important principle for AI systems [HLEG19]. In many AI systems, explainability is not self-evident, and AI researchers expect that the challenge of making AI explainable will only grow. On the one hand, this stems from the applications: AI will be used more and more often, for larger and more sensitive decisions. On the other hand, organizations are building better and better models, for example by using more diverse inputs; with more complex AI models, it is often less clear how a decision was made. Organizations that deploy AI must take into account users' need for explanations, and systems that use AI should be designed to provide the user with appropriate explanations. In this guide, we first explain the legal requirements for the explainability of AI systems, which come from the GDPR and the AI Act. Next, we explain how AI is used in the financial sector and elaborate on one problem in detail. For this problem, we then show how the user interface can be modified to make the AI explainable. These designs serve as prototypical examples that can be adapted to new problems. The guidance is based on the explainability of AI systems in the financial sector, but the advice can also be used in other sectors.
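As an illustration of the kind of user-facing explanation the guide argues for, the sketch below uses a simple linear scoring model for a credit decision, whose per-feature contributions can be listed directly and ranked by their effect. The weights, feature names and threshold are invented for the example; they are not the guide's actual prototypes.

import math

# Invented weights and features, for illustration only.
WEIGHTS = {"income_stability": 1.2, "debt_ratio": -2.0, "payment_arrears": -1.5}
BIAS = 0.5

def decide_and_explain(applicant: dict):
    """Return the decision plus the per-feature contributions that drove it."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = 1 / (1 + math.exp(-(BIAS + sum(contributions.values()))))
    verdict = "approved" if score >= 0.5 else "rejected"
    # Sort by absolute effect so the user sees the decisive factors first.
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return verdict, ranked

verdict, reasons = decide_and_explain(
    {"income_stability": 0.9, "debt_ratio": 0.7, "payment_arrears": 1.0})
print(verdict)
for feature, effect in reasons:
    print(f"  {feature}: {'raised' if effect > 0 else 'lowered'} the score by {abs(effect):.2f}")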
DOCUMENT
People with a low socioeconomic status and/or a non-Dutch background have a higher risk of lifestyle-related conditions such as cardiovascular disease and diabetes, and lifestyle intervention is often indicated. mHealth applications (health applications for phone or tablet) can offer possibilities to support them in behaviour change, provided the application is personally tailored to the target group.1 This qualitative study investigated what possibilities Hindustani people see for the use of such mHealth applications. This target group inherently has an elevated risk profile for lifestyle-related diseases.
https://ntvd.media/artikelen/mhealth-voor-leefstijlinterventie-bij-hindoestanen/
LinkedIn: https://www.linkedin.com/in/luka-van-der-veken-ab5532188/ https://www.linkedin.com/in/machteldvanlieshout/ https://www.linkedin.com/in/jacqueline-langius/
MULTIFILE
ABSTRACT
Purpose: This short paper describes the dashboard design process for online hate speech monitoring across multiple languages and platforms.
Methodology/approach: A case study approach was adopted in which the authors followed a research & development project for a multilingual and multiplatform online dashboard monitoring online hate speech. The case under study is the project for the European Observatory of Online Hate (EOOH).
Results: We outline the design and prototype development process, which followed a design thinking approach involving multiple potential user groups of the dashboard. The paper presents the outcome of this process and the dashboard's initial use. The identified issues, such as the obfuscation of the context or the identity of user accounts behind social media posts, which limits the dashboard's usability but offers a trade-off in privacy protection, may contribute to the discourse on privacy and data protection in (big data) social media analysis for practitioners.
Research limitations/implications: The results are from a single case study. Still, they may be relevant for other online hate speech detection and monitoring projects involving big data analysis and human annotation.
Practical implications: The study emphasises the need to involve diverse user groups and a multidisciplinary team in developing a dashboard for online hate speech. The context in which potential online hate is disseminated, and the network of accounts distributing or interacting with that hate speech, appear relevant for analysis by some of the dashboard's user groups.
International Information Management Association
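The trade-off between usability and privacy protection described above can be sketched as keyed pseudonymization: account handles are replaced by stable pseudonyms before they reach the dashboard, so analysts can still follow one account across posts without learning who it is. The key handling and field names below are assumptions, not the EOOH implementation.

import hmac
import hashlib

# Assumed key management, for illustration only; in practice the key would
# live in a secrets manager and be rotated.
SECRET_KEY = b"rotate-me-regularly"

def pseudonymise(handle: str) -> str:
    """Keyed hash keeps pseudonyms stable per account but unlinkable
    to the real handle without the key."""
    return "acct_" + hmac.new(SECRET_KEY, handle.encode(), hashlib.sha256).hexdigest()[:10]

posts = [{"author": "@example_user", "text": "..."},
         {"author": "@example_user", "text": "..."}]
for p in posts:
    p["author"] = pseudonymise(p["author"])
print(posts)  # same pseudonym on both posts, real handle gone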
LINK