This chapter focuses on the deep evolutionary history of the cognitive capacities underlying linguistic iconicity. The complex capacity for linguistic iconicity has roots in a more general cross-modal ability present throughout the animal kingdom: cross-modal transfer, the ability to make basic inferences about the sensory properties of an object in multiple modalities based on experience in only one. This situates iconicity as a fundamentally cross-modal phenomenon, part of a broader, uniquely human cross-modal cognitive suite that includes relatively rare phenomena such as synesthesia alongside more ubiquitous phenomena such as sensory metaphor and cross-modal correspondences. Evidence suggests that this evolutionarily deep capacity for cross-modal transfer was honed into the more sophisticated capacities underlying iconicity by an evolutionary ratchet of increased prosociality during human self-domestication. This period provided strong selective pressures for increasingly complex cross-sensory communication and, eventually, the predominantly arbitrary symbolic systems that underpin modern human language.

This is a peer-reviewed preprint of the work below. Cuskley, Christine and Kees Sommer (forthcoming). The evolution of linguistic iconicity and the cross-modal cognitive suite. To appear in Olga Fisher, Kimi Akita, and Pamela Perniss (eds.), Oxford Handbook of Iconicity in Language. Oxford University Press: Oxford, UK.
1. Purpose of the Research
The research aims to develop a concept of operations (ConOps) that could connect aviation and all existing and future transport modes into an overall efficient transport network. Such a ConOps should provide future passengers with a rapid and seamless travel experience.

2. Research Design, Methodology or Approach
This paper describes a ConOps for a holistic traffic management system modelled on ATM (Air Traffic Management). For this purpose, the influence of quality management systems and other organizational facilities on the quality of passenger travel was examined. Various management systems, such as those for resources, traffic information, energy, fleet emergency calls, security and infrastructure, as well as applications such as weather information platforms and tracking systems, were integrated.

3. Expected Research Findings
The ConOps is intended to pave the way to cross-modal traffic management in which the preferences of travellers have a high priority. The first results show that passenger needs can be anticipated and met, and traffic resources used economically, only through close cooperation and coordination of these management systems and applications with regard to possible synergies and interactions.

4. Summary of the Originality/Contribution
To develop this ConOps, general and traffic management systems, along with basic principles of quality management, were surveyed in the literature and summarized into a Total Traffic Management System (TTM). The ATM experience served as a model example. The ConOps can be used as a basis for building a previously non-existing TTM to manage future travel and future transport modes.
Three-dimensional (3D) reconstruction has become a fundamental technology in applications ranging from cultural heritage preservation and robotics to forensics and virtual reality. As these applications grow in complexity and realism, the quality of the reconstructed models becomes increasingly critical. Among the many factors that influence reconstruction accuracy, the lighting conditions at capture time remain one of the most influential, yet widely neglected, variables. This review provides a comprehensive survey of classical and modern 3D reconstruction techniques, including Structure from Motion (SfM), Multi-View Stereo (MVS), Photometric Stereo, and recent neural rendering approaches such as Neural Radiance Fields (NeRFs) and 3D Gaussian Splatting (3DGS), while critically evaluating their performance under varying illumination conditions. We describe how lighting-induced artifacts such as shadows, reflections, and exposure imbalances compromise reconstruction quality, and how different approaches attempt to mitigate these effects. Furthermore, we identify fundamental gaps in current research, including the lack of standardized lighting-aware benchmarks and the limited robustness of state-of-the-art algorithms in uncontrolled environments. By synthesizing knowledge across fields, this review aims to provide a deeper understanding of the interplay between lighting and reconstruction and to outline future research directions that emphasize the need for adaptive, lighting-robust solutions in 3D vision systems.
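To make the lighting dependence concrete, the sketch below implements textbook Lambertian photometric stereo with NumPy. It is a minimal illustration under strong assumptions (known distant light directions, linear grayscale intensities, no shadows or specular reflections); the function name and array shapes are ours and do not correspond to any specific method reviewed here.

    # Minimal Lambertian photometric stereo sketch (illustrative only).
    # Assumes K grayscale images of a static scene under K known, distant
    # light directions, with no shadows or specularities -- exactly the
    # lighting assumptions that break down in uncontrolled environments.
    import numpy as np

    def photometric_stereo(images, light_dirs):
        """Recover per-pixel surface normals and albedo.

        images:     array of shape (K, H, W), linear-intensity grayscale frames.
        light_dirs: array of shape (K, 3), unit light-direction vectors.
        """
        K, H, W = images.shape
        I = images.reshape(K, -1)                    # (K, H*W) stacked intensities
        # Lambertian model: I = L @ g, where g = albedo * normal at each pixel.
        g, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)   # (3, H*W)
        albedo = np.linalg.norm(g, axis=0)           # reflectance magnitude
        normals = g / np.maximum(albedo, 1e-8)       # unit surface normals
        return normals.reshape(3, H, W), albedo.reshape(H, W)

Pixels that fall into shadow, saturate, or reflect specularly violate the linear model above, which is one simple way to see how the shadows, reflections, and exposure imbalances discussed in this review degrade the recovered geometry.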