According to Johnson & Grandison (2007), failure to safeguard the privacy of users of services provided by private and governmental organisations leaves individuals exposed to a number of undesirable effects of information processing. Loss of control over information about a person may lead to fraud, identity theft and reputation damage, and may cause psychosocial consequences ranging from mild irritation and unease to social exclusion, physical harm or even, in extreme cases, death. Although pooh-poohed by some opinion leaders from the search engine and ICT industries for over a decade (Sprenger, 1999; Esguerra, 2009), the debate in the wake of events like the tragic case of Amanda Todd could be interpreted as supporting a case for proper attention to citizens’ privacy. Truth be told, for a balanced discussion on privacy in the age of Facebook one should not turn to the social media environment, which seems to hail any new development in big data analysis and profiling-based marketing as a breathtaking innovation. If the myopic view of technology pundits is put aside, a remarkably lively debate on privacy and related issues may be discerned in the media and scientific communities alike. A quick keyword search on ‘privacy’, limited to the years 2000-2015, yields huge numbers of publications: WorldCat lists 19,240; ScienceDirect 52,566; IEEE Xplore 71,684; and Google Scholar a staggering 1,880,000. This makes clear that privacy is still considered relevant by the general public as well as academic and professional audiences. Quite impressive for a subject area that has been declared ‘dead’.
This paper argues that online privacy controls are based on a transactional model of privacy, leading to a collective myth of consensual data practices. It proposes privacy coordination as an alternative vision, and frames realizing this vision as a grand challenge in Ethical UX.
In this paper we explore the extent to which privacy enhancing technologies (PETs) could be effective in providing privacy to citizens. The rapid development of ubiquitous computing and ‘the internet of things’ is leading to Big Data and the application of Predictive Analytics, effectively merging the real world with cyberspace. The power of information technology is increasingly used to provide personalised services to citizens, leading to the availability of huge amounts of sensitive data about individuals, with potential and actual privacy-eroding effects. To protect the private sphere, deemed essential in a state of law, information and communication systems (ICTs) should meet the requirements laid down in numerous privacy regulations. Sensitive personal information may be captured by organizations, provided that the person providing the information consents to its being gathered, and it may only be used for the express purpose for which it was gathered. Any other use of information about persons without their consent is prohibited by law, notwithstanding legal exceptions. If regulations are properly translated into written code, they will be part of the outcomes of an ICT, and that ICT will therefore be privacy compliant. We conclude that privacy compliance in the ‘technological’ sense cannot meet citizens’ concerns completely, and should therefore be augmented by a conceptual model that makes privacy impact assessments at the level of citizens’ lives possible.
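To make the notion of regulations ‘translated into written code’ concrete, the following is a minimal, hypothetical sketch of the kind of consent and purpose-limitation check such an ICT would have to enforce; the names (ConsentRecord, may_process) and the deny-by-default rule are illustrative assumptions, not taken from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    """Consent given by a data subject for specific processing purposes."""
    subject_id: str
    purposes: set = field(default_factory=set)

def may_process(record: ConsentRecord, requested_purpose: str) -> bool:
    """Purpose-limitation check: processing is permitted only for a purpose
    the subject explicitly consented to; anything else is denied by default
    (legal exceptions would need their own explicit rules)."""
    return requested_purpose in record.purposes

# Example: consent was given for service personalisation only.
consent = ConsentRecord(subject_id="citizen-42", purposes={"personalisation"})
print(may_process(consent, "personalisation"))  # True: within the consented purpose
print(may_process(consent, "marketing"))        # False: no consent, so prohibited
```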
In this project we examine the laws and regulations surrounding data collection using sensors in assistive technology, as well as the literature on people’s concerns about this technology. We also look into the Smart Teddy device and how it operates. An analysis required by the General Data Protection Regulation (GDPR) [5] will reveal the privacy and security risks in this project and how to mitigate them.
Following the rationale of the current EU legal framework protecting personal data, children are entitled to the same privacy and data protection rights as adults. However, the child, because of their physical and mental immaturity, needs special safeguards and care, including appropriate legal protection. In the online environment, children are less likely to make any checks or judgments before entering personal information. Therefore, this paper presents an analysis of the extent to which EU regulation can ensure children’s online privacy and data protection.
Data collected from fitness trackers worn by employees could be very useful for businesses. The sharing of this data with employers is already a well-established practice in the United States, and companies in Europe are showing an interest in the introduction of such devices among their workforces. Our argument is that employers processing their employees’ fitness tracker data is unlikely to be lawful under the General Data Protection Regulation (GDPR). Wearable fitness trackers, such as Fitbit and Apple Watch devices, collate intimate data about the wearer’s location, sleep and heart rate. As a result, we consider that they not only represent a novel threat to the privacy and autonomy of the wearer, but that the data gathered constitutes ‘health data’ regulated by Article 9. Processing health data, including, in our view, fitness tracking data, is prohibited unless one of the specified conditions in the GDPR applies. After examining a number of legitimate bases which employers can rely on, we conclude that the data processing practices considered do not comply with the principle of lawfulness that is central to the GDPR regime. We suggest alternative schemas by which wearable fitness trackers could be integrated into an organization to support healthy habits amongst employees, but in a manner that respects the data privacy of the individual wearer.
Design and development practitioners, such as those in game development, often have difficulty comprehending and adhering to the European General Data Protection Regulation (GDPR), especially when designing in a privacy-sensitive way. An inadequate understanding of how to apply the GDPR in the game development process can lead to one of two consequences: 1. inadvertently violating the GDPR, with sizeable fines as potential penalties; or 2. avoiding the use of user data entirely. In this paper, we present our work on designing and evaluating the “GDPR Pitstop tool”, a gamified questionnaire developed to empower game developers and designers by increasing legal awareness of the GDPR in a relatable and accessible manner. The GDPR Pitstop tool was developed with a user-centered approach and in close contact with stakeholders, including game development practitioners, legal experts, and communication and design experts. Three design choices worked for this target group: 1. careful crafting of the language of the questions; 2. a flexible structure; and 3. a playful design. By combining these three elements into the GDPR Pitstop tool, GDPR awareness within the gaming industry can be improved and game developers and designers can be empowered to handle user data in a GDPR-compliant manner. Additionally, this approach can be scaled to other tricky issues faced by design professionals, such as privacy by design.
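The abstract does not describe how the tool’s ‘flexible structure’ is implemented; purely as an illustration, a flexible questionnaire is often represented as a branching graph of questions, along these hypothetical lines (the question texts and identifiers are invented, not taken from the GDPR Pitstop tool).

```python
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Question:
    """One questionnaire item; next_by_answer lets the flow branch so that
    respondents only see the follow-up questions relevant to their answers."""
    qid: str
    text: str
    next_by_answer: dict[str, str | None]  # answer -> next question id (None ends)

# Hypothetical fragment of a branching GDPR-awareness questionnaire.
QUESTIONS = {
    "q1": Question("q1", "Does your game collect any player data?",
                   {"yes": "q2", "no": None}),
    "q2": Question("q2", "Do you ask for consent before collecting it?",
                   {"yes": None, "no": "q3"}),
    "q3": Question("q3", "Do you rely on another legal basis, e.g. contract?",
                   {"yes": None, "no": None}),
}

def walk(answers: dict[str, str]) -> list[str]:
    """Traverse the questionnaire for a given set of answers; returns the path taken."""
    path, current = [], "q1"
    while current is not None:
        path.append(current)
        current = QUESTIONS[current].next_by_answer[answers[current]]
    return path

print(walk({"q1": "yes", "q2": "no", "q3": "yes"}))  # ['q1', 'q2', 'q3']
```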
Firms increasingly use social network sites to reach out to customers and proactively intervene with observed consumer messages. Despite intentions to enhance customer satisfaction by extending customer service, sometimes these interventions are received negatively by consumers. We draw on privacy regulation theory to theorize how proactive customer service interventions with consumer messages on social network sites may evoke feelings of privacy infringement. Subsequently we use privacy calculus theory to propose how these perceptions of privacy infringement, together with the perceived usefulness of the intervention, in turn drive customer satisfaction. In two experiments, we find that feelings of privacy infringement associated with proactive interventions may explain why only reactive interventions enhance customer satisfaction. Moreover, we find that customer satisfaction can be modeled through the calculus of the perceived usefulness and feelings of privacy infringement associated with an intervention. These findings contribute to a better understanding of the impact of privacy concerns on consumer behavior in the context of firm–consumer interactions on social network sites, extend the applicability of privacy calculus theory, and contribute to complaint and compliment management literature. To practitioners, our findings demonstrate that feelings of privacy infringement are an element to consider when handling consumer messages on social media, but also that privacy concerns may be overcome if an intervention is perceived as useful enough.
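The paper models satisfaction as the outcome of a calculus of perceived usefulness and perceived privacy infringement, but the abstract does not state a functional form; under a simple linear assumption (illustrative only, not the authors’ specification), such a calculus could be written as:

```latex
% Hypothetical linear form of the privacy calculus; the coefficients and
% error term are illustrative assumptions, not estimates from the paper.
\[
  \mathit{Satisfaction} = \beta_0
    + \beta_1 \,\mathit{PerceivedUsefulness}
    - \beta_2 \,\mathit{PrivacyInfringement}
    + \varepsilon
\]
```

Here positive \(\beta_1\) and \(\beta_2\) capture the trade-off the experiments report: perceived usefulness raises satisfaction while perceived infringement lowers it.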
Human behaviour change is necessary to meet targets set by the Paris Agreement to mitigate climate change. Restrictions and regulations put in place globally to mitigate the spread of COVID-19 during 2020 have had a substantial impact on everyday life, including many carbon-intensive behaviours such as transportation. Changes to transportation behaviour may reduce carbon emissions. Behaviour change theory can offer perspective on the drivers and influences of behaviour and shape recommendations for how policy-makers can capitalise on any observed behaviour changes that may mitigate climate change. For this commentary, we aimed to describe changes in data relating to transportation behaviours concerning working from home during the COVID-19 pandemic across the Netherlands, Sweden and the UK. We display these identified changes in a concept map, suggesting links between the changes in behaviour and levels of carbon emissions. We consider these changes in relation to a comprehensive and easy-to-understand model of behaviour, the Capability, Opportunity, Motivation – Behaviour (COM-B) model, to understand the capabilities, opportunities and motivations underlying the observed behaviour changes, and the potential for policy to mitigate climate change. There is now an opportunity for policy-makers to increase the likelihood of maintaining pro-environmental behaviour changes by providing opportunities, improving capabilities and maintaining motivation for these behaviours.
AI-driven lifestyle monitoring systems collect data from ambient, motion, contact, light, and physiological sensors placed in the home, enabling AI algorithms to identify daily routines and detect deviations to support older adults "aging in place." Despite its potential to address several challenges in long-term care for older adults, implementation remains limited. This study explored the facilitators of and barriers to implementing AI-driven lifestyle monitoring in long-term care for older adults, as perceived by formal and informal caregivers, as well as management, in both an adopting and a non-adopting healthcare organization. A qualitative interview study using semi-structured interviews was conducted with 22 participants (5 informal caregivers, 10 formal caregivers, and 7 participants in a management position) from two long-term care organizations. Reflexive thematic analysis, guided by the non-adoption, abandonment, scale-up, spread, and sustainability (NASSS) framework, structured the findings into facilitators and barriers. Twelve facilitators and 16 barriers were identified, highlighting AI-driven lifestyle monitoring as a valuable, patient-centred, and unobtrusive tool that enhances care efficiency and caregiver reassurance. However, barriers such as privacy concerns, notification overload, training needs, and organizational alignment must be addressed. Contextual factors, including regulations, partnerships, and financial considerations, further influence implementation. This study showed that to optimize the implementation of AI-driven lifestyle monitoring, organizations should address privacy concerns, provide training, engage in system (re)design and create a shared vision. A comprehensive approach across all levels is essential for successful AI integration in long-term care for older adults.
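As a rough sketch of the mechanism named in the opening sentence, learning a daily routine and flagging deviations can be reduced to a baseline-and-threshold check over hourly sensor counts; the function below is an illustrative assumption (a simple z-score rule), not the algorithm used by the systems studied.

```python
import numpy as np

def detect_deviation(history: np.ndarray, today: np.ndarray,
                     z_threshold: float = 3.0) -> list:
    """Flag hours of `today` that deviate from the learned routine.

    history: (n_days, 24) array of hourly motion-sensor event counts.
    today:   (24,) array of today's hourly counts.
    Returns the hours whose absolute z-score exceeds the threshold.
    """
    mean = history.mean(axis=0)
    std = history.std(axis=0) + 1e-9          # avoid division by zero
    z = np.abs(today - mean) / std
    return [int(h) for h in np.nonzero(z > z_threshold)[0]]

# Synthetic example: a stable routine, then an unusually inactive morning.
rng = np.random.default_rng(0)
baseline = rng.poisson(lam=20, size=(30, 24))  # 30 days of hourly counts
today = baseline.mean(axis=0).astype(int)
today[7:10] = 0                                # no movement 07:00-10:00
print(detect_deviation(baseline, today))       # flags hours 7, 8, 9
```

In a deployed system, a flagged deviation would translate into the kind of caregiver notification whose volume the interviewees identified as a barrier (notification overload), which is why the threshold choice matters.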