Lighting in video games is used to set mood and atmosphere, or can serve as a gameplay tool. This paper examines the effects that lighting concepts applied to a virtual game environment have on players’ navigation within the game. Previously established lighting concepts were tested in a virtual environment to determine whether they affect the perception of the presented virtual space as they do in real life, and what effect they have on the navigational behavior of players. In a game experiment with 50 male participants we show that these lighting concepts apply to the virtual environment much as they do in real life, although their effects on the participants’ navigational behavior remain inconclusive.
Three-dimensional (3D) reconstruction has become a fundamental technology in applications ranging from cultural heritage preservation and robotics to forensics and virtual reality. As these applications grow in complexity and realism, the quality of the reconstructed models becomes increasingly critical. Among the many factors that influence reconstruction accuracy, the lighting conditions at capture time remain one of the most influential, yet widely neglected, variables. This review provides a comprehensive survey of classical and modern 3D reconstruction techniques, including Structure from Motion (SfM), Multi-View Stereo (MVS), Photometric Stereo, and recent neural rendering approaches such as Neural Radiance Fields (NeRFs) and 3D Gaussian Splatting (3DGS), while critically evaluating their performance under varying illumination conditions. We describe how lighting-induced artifacts such as shadows, reflections, and exposure imbalances compromise reconstruction quality, and how different approaches attempt to mitigate these effects. Furthermore, we uncover fundamental gaps in current research, including the lack of standardized lighting-aware benchmarks and the limited robustness of state-of-the-art algorithms in uncontrolled environments. By synthesizing knowledge across fields, this review aims to provide a deeper understanding of the interplay between lighting and reconstruction, and outlines future research directions that emphasize the need for adaptive, lighting-robust solutions in 3D vision systems.
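As an illustration of the exposure-imbalance problem described above, the sketch below masks under- and over-exposed pixels before keypoint detection, one common preprocessing step ahead of SfM feature matching. It is a minimal example assuming OpenCV's SIFT detector; the input filename is hypothetical and the thresholds are illustrative, not taken from any specific surveyed method.

```python
import cv2
import numpy as np

def exposure_mask(image_bgr, low=10, high=245):
    """Mask out under- and over-exposed pixels before feature matching.

    Pixels near the sensor's limits carry unreliable gradients, so
    feature detectors tend to produce spurious or unmatchable keypoints
    there. Returns a uint8 mask usable with OpenCV detectors.
    """
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    mask = ((gray > low) & (gray < high)).astype(np.uint8) * 255
    # Erode so keypoints are not placed right on the artifact border.
    return cv2.erode(mask, np.ones((5, 5), np.uint8))

image = cv2.imread("capture_0001.jpg")  # hypothetical input frame
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(
    cv2.cvtColor(image, cv2.COLOR_BGR2GRAY),
    exposure_mask(image),
)
```

Restricting detection to well-exposed regions reduces spurious matches in blown-out highlights and crushed shadows, at the cost of discarding some scene coverage.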
Lighting accounts for a significant share of electrical energy consumption in office buildings, up to 45% of the total. This consumption can be reduced by as much as 60% through an occupant-dependent lighting control strategy. With particular focus on open-plan offices, where differences in individual occupancy patterns make this strategy more challenging to apply, this paper covers to what extent individual occupancy-based lighting control has been (1) tested, (2) developed, and (3) evaluated. Search terms were defined using three categories, namely ‘occupancy patterns’, ‘lighting control strategy’, and ‘office’. Relevant articles were selected through a structured search of key online scientific databases and journals. The 24 studies identified as eligible were evaluated on six criteria: (1) study characteristics, (2) office characteristics, (3) lighting system characteristics, (4) lighting control design, (5) post-occupancy evaluation, and (6) conclusions; this evaluation was used to answer the research questions. It was concluded that the strategy has not yet been tested in field studies in open-plan offices, and that it needs further development before it can be applied in this type of office. Although lighting currently tends to be controlled at workspace level, many aspects of the strategy can be further developed; there is potential to further increase energy savings on lighting within open-plan office spaces. Individual occupancy-based lighting control requires further validation, focusing on the factors influencing its energy savings, on its cost effectiveness, and on its acceptability to users.
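To make the strategy concrete, the sketch below shows one possible per-workspace control loop for individual occupancy-based lighting: full output while a desk is occupied, a dimmed grace period after the occupant leaves, then off. The `Workspace` class, the timings, and the dim level are illustrative assumptions, not values drawn from the reviewed studies.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Workspace:
    """One desk in an open-plan office with its own sensor and luminaire."""
    name: str
    occupied: bool = False
    last_seen: float = field(default_factory=time.monotonic)

def desired_level(ws, now, dim_after=300.0, off_after=900.0):
    """Individual occupancy-based dimming: full light while occupied,
    a dimmed grace period after departure, then off (hypothetical timings)."""
    if ws.occupied:
        return 1.0
    idle = now - ws.last_seen
    if idle < dim_after:
        return 1.0   # hold the level briefly to avoid flicker on short absences
    if idle < off_after:
        return 0.2   # dimmed background level keeps the space legible
    return 0.0

# Example: two desks, one of which was vacated ten minutes ago.
now = time.monotonic()
desks = [Workspace("A1", occupied=True, last_seen=now),
         Workspace("A2", occupied=False, last_seen=now - 600)]
for ws in desks:
    print(ws.name, desired_level(ws, now))
```

Controlling each luminaire from its own desk's sensor, rather than a shared zone sensor, is what allows the larger savings in open-plan spaces, since unoccupied desks no longer stay lit because a neighbour is present.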
In this project, the AGM R&D team developed and refined the use of a facial scanning rig. The rig is a physical device comprising multiple cameras and lights mounted on scaffolding around a 'scanning volume': the area in which objects are placed before being photographed from multiple angles. The object is typically a person's head, but it can be anything of approximately this size. Software compares the photographs to create a digital 3D recreation, a process called photogrammetry. The 3D model is then processed by further pieces of software and eventually becomes a face that can be animated in Unreal Engine, a popular piece of game development software made by the company Epic. This project was funded by Epic's 'Megagrant' system, and the focus of the work is on streamlining and automating the processing pipeline and on improving the quality of the resulting output. Additional work has been done on skin shaders (simulating the quality of real skin in digital form) and on the use of AI to create or recreate lifelike hair styles. The R&D work has produced significant savings in processing time, has improved the quality of facial scans, has produced a system that has benefitted the educational offering of BUas, and has attracted collaborators from the commercial entertainment and simulation industries. This work complements and extends previous work done on the VIBE project, where the focus was on creating lifelike human avatars for the medical industry.
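As a sense of what streamlining and automating the processing pipeline can look like, the sketch below batches capture sessions through a command-line reconstruction step. `photogrammetry-cli` and its flags are placeholders for whichever reconstruction tool the pipeline actually wraps, and the directory layout is a hypothetical example, not the project's actual structure.

```python
import subprocess
from pathlib import Path

def process_scan(capture_dir: Path, output_dir: Path) -> None:
    """Run one scanning-volume capture through a photogrammetry CLI.

    `photogrammetry-cli` is a placeholder; the flags shown are hypothetical.
    """
    output_dir.mkdir(parents=True, exist_ok=True)
    if not sorted(capture_dir.glob("*.jpg")):
        raise FileNotFoundError(f"no captures found in {capture_dir}")
    subprocess.run(
        ["photogrammetry-cli",
         "--input", str(capture_dir),
         "--output", str(output_dir / "head.obj")],
        check=True,
    )

# Sweep every capture session without manual intervention.
for session in sorted(Path("captures").iterdir()):
    if session.is_dir():
        process_scan(session, Path("models") / session.name)
```

Wrapping each stage as an unattended batch step like this is what converts a per-scan manual workflow into a pipeline whose cost scales with machine time rather than artist time.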