The concept of immersion has been widely used in the design and evaluation of user experiences. Augmented-, virtual- and mixed-reality environments have further sparked discussion of immersive user experiences and their underlying requirements. However, a clear definition of immersive experiences, and agreement on criteria for their design, is still lacking, which creates challenges to advancing our understanding of immersive experiences and how they can be designed. Based on a multidisciplinary Delphi approach, this study provides a uniform definition of immersive experiences and identifies key criteria for their design and staging. Thematic analysis revealed five key themes (transition into/out of the environment, in-experience user control, environment design, user context relatedness, and user openness and motivation) that emphasise the coherence of the user-environment interaction in the immersive experience. The study proposes an immersive experience framework as a guideline for industry practitioners, outlining key design criteria for four distinct facilitators of immersive experiences: systems, spatial, empathic/social, and narrative/sequential immersion. Further research is proposed using the immersive experience framework to investigate the hierarchy of user senses to optimise experiences that blend physical and digital environments, and to study triggered, desired and undesired effects on user attitude and behaviour.
Stakeholders must purposefully reflect on the suitability of process models for designing tourism experience systems. Specific characteristics of these models relate to developing tourism experience systems as integral parts of wider socio-technical systems. Choices made in crafting such models need to address three reflexivity mechanisms: problem, stakeholder and method definition. We systematically evaluate the application of these mechanisms in a living lab experiment by developing evaluation episodes using the framework for evaluation in design science research. We outline (i) the development of these evaluation episodes and (ii) how executing them influenced the process and outcomes of co-crafting the process model. We highlight both the benefits of, and an approach to, incorporating reflexivity in developing process models for designing tourism experience systems.
Structured experience (SE) providers continuously evaluate and improve their experiential offerings to make them more memorable. Arguably, the temporal dynamics of the emotions in an experience have a crucial influence on its memorability. Traditional post-experience evaluation procedures tend to ignore these temporal dynamics, thus offering imprecise feedback for providers on exactly when and where to optimize their experiential offerings. In this paper, we use two methods as tools for evaluating how closely the lived experience of an SE follows the experience as intended by the provider: real-time skin conductance (SC) and experience reconstruction measures (ERMs). We demonstrate that both SC and ERMs are significantly related to the intended experience. This link was found to be stronger for later sections of the experience than for earlier sections. In addition, SC and ERMs appear to be useful tools for assessing the effectiveness of design interventions, thus providing valuable feedback for SE providers.
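The analysis described above relates a continuously measured skin-conductance signal, section by section, to the provider's intended experience profile. A minimal sketch of such a per-section comparison is shown below, assuming both series have been resampled to a common timeline; the function name, section split, and toy data are illustrative assumptions, not taken from the study itself:

```python
import numpy as np

def section_correlations(sc_signal, intended_profile, n_sections):
    """Split both time series into equal-length sections and compute the
    Pearson correlation between measured skin conductance and the
    intended experience profile within each section."""
    sc_parts = np.array_split(np.asarray(sc_signal, dtype=float), n_sections)
    in_parts = np.array_split(np.asarray(intended_profile, dtype=float), n_sections)
    corrs = []
    for sc, intended in zip(sc_parts, in_parts):
        # Pearson correlation is undefined for constant segments
        if sc.std() == 0 or intended.std() == 0:
            corrs.append(float("nan"))
        else:
            corrs.append(float(np.corrcoef(sc, intended)[0, 1]))
    return corrs

# Toy example: intended arousal ramps up over the experience,
# and the measured SC signal follows it with added noise.
t = np.linspace(0, 1, 300)
intended = t ** 2
rng = np.random.default_rng(0)
sc = intended + rng.normal(0, 0.05, size=t.size)
print(section_correlations(sc, intended, n_sections=3))
```

Such a per-section breakdown is one way to surface the paper's observation that the link between lived and intended experience can differ between earlier and later sections of an experience.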
In the past two years [2010-2012] we have done research on the visitor experience of music festivals. We conducted several surveys asking festival visitors about demographic variables, taste in music, their motivation for visiting festivals, mentalities, and their evaluation of the festival. We also asked about their use of social media before, during and after the festival. Results show that visitors who use social media have a significantly different festival experience from visitors who do not use social media before, during or after the festival. Results on differences in festival satisfaction are mixed.
Introduction: In March 2014, the New South Wales (NSW) Government (Australia) announced the NSW Integrated Care Strategy. In response, a family-centred, population-based, integrated care initiative for vulnerable families and their children in Sydney, Australia was developed, called Healthy Homes and Neighbourhoods. A realist translational social epidemiology programme of research and collaborative design is at the foundation of its evaluation. Theory and Method: The UK Medical Research Council (MRC) Framework for evaluating complex health interventions was adapted. This has four components, namely 1) development, 2) feasibility/piloting, 3) evaluation and 4) implementation. We adapted the Framework to include critical realist, theory-driven, and continuous improvement approaches. The modified Framework underpins this research and evaluation protocol for Healthy Homes and Neighbourhoods. Discussion: The NSW Health Monitoring and Evaluation Framework did not make provision for assessing the layers of programme context, or the effect of programme mechanisms at each level. We therefore developed a multilevel approach that uses mixed-method research to examine not only outcomes, but also what is working for whom and why.
The goal for the coming years is to gain insight into the guest experience in hotels. What is guest experience? How can guest experience be measured? What is the relation between guest experience and guest loyalty? And finally, what tangible elements in the physical environment of hotels, and in the contact with hotel employees, may improve the experience of hotel guests, and in what way should these elements be changed? This paper describes the first and second steps towards this goal: a theoretical background of guest experience and the development of the Guest Experience Scan for NH Hoteles. This Guest Experience Scan is a quantitative instrument that aims to measure guests' affective evaluation of the physical environment of the hotel and of their contact with hotel employees.
User experience (UX) research on pervasive technologies faces considerable challenges regarding today's mobile context-sensitive applications: evaluative field studies lack control, whereas lab studies miss the interaction with a dynamic context. This dilemma has inspired researchers to use virtual environments (VEs) to acquire control while offering the user a rich contextual experience. Although promising, these studies are mainly concerned with usability and the technical realization of their setup. Furthermore, previous setups leave room for improvement regarding the user's immersive experience. This paper contributes to this line of research by presenting a UX case study on mobile advertising with a novel CAVE-smartphone interface. We conducted two experiments in which we evaluated the intrusiveness of a mobile location-based advertising app in a virtual supermarket. The results confirm our hypothesis that context-congruent ads lessen the experienced intrusiveness, thereby demonstrating that our setup is capable of generating preliminary meaningful results with regard to UX. Furthermore, we share insights into conducting these studies.
Explainable Artificial Intelligence (XAI) aims to provide insights into the inner workings and the outputs of AI systems. Recently, there has been growing recognition that explainability is inherently human-centric, tied to how people perceive explanations. Despite this, there is no consensus in the research community on whether user evaluation is crucial in XAI, and if so, what exactly needs to be evaluated and how. This systematic literature review addresses this gap by providing a detailed overview of the current state of affairs in human-centered XAI evaluation. We reviewed 73 papers across various domains where XAI was evaluated with users. These studies assessed what makes an explanation "good" from a user's perspective, i.e., what makes an explanation meaningful to a user of an AI system. We identified 30 components of meaningful explanations that were evaluated in the reviewed papers and categorized them into a taxonomy of human-centered XAI evaluation, based on: (a) the contextualized quality of the explanation, (b) the contribution of the explanation to human-AI interaction, and (c) the contribution of the explanation to human-AI performance. Our analysis also revealed a lack of standardization in the methodologies applied in XAI user studies, with only 19 of the 73 papers applying an evaluation framework used by at least one other study in the sample. These inconsistencies hinder cross-study comparisons and broader insights. Our findings contribute to understanding what makes explanations meaningful to users and how to measure this, guiding the XAI community toward a more unified approach in human-centered explainability.