Digital surveillance technologies using artificial intelligence (AI) tools such as computer vision and facial recognition are becoming cheaper and easier to integrate into governance practices worldwide. Morocco serves as an example of how such technologies are becoming key tools of governance in authoritarian contexts. Based on qualitative fieldwork including semi-structured interviews, observation, and extensive desk reviews, this chapter focuses on the role played by AI-enhanced technology in urban surveillance and in the control of migration at the Moroccan–Spanish borders. Two cross-cutting issues emerge: first, while international donors provide funding for urban and border surveillance projects, their role in enforcing transparency mechanisms during implementation remains limited; second, Morocco’s existing legal framework hinders any kind of public oversight. Video surveillance is treated as the sole prerogative of the security apparatus, and so far public actors have avoided engaging directly with the topic. The lack of institutional oversight and public debate on the matter raises serious concerns about the extent to which the deployment of such technologies affects citizens’ rights. AI-enhanced surveillance is thus an intrinsically transnational challenge in which private interests of economic gain and public interests of national security collide with citizens’ human rights across the Global North/Global South divide.
We examined the neural correlates of facial attractiveness by presenting pictures of male or female faces (neutral expression) of low, intermediate, or high attractiveness to 48 male and female participants while recording their electroencephalogram (EEG). Subjective attractiveness ratings were used to determine the 10% highest, 10% middlemost, and 10% lowest rated faces for each individual participant, allowing high-contrast comparisons. These were then split into preferred and dispreferred gender categories. The ERP components P1, N1, P2, N2, early posterior negativity (EPN), P300, and late positive potential (LPP) (up until 3000 ms post-stimulus), as well as the face-specific N170, were analysed. A salience effect (attractive/unattractive > intermediate) in an early LPP interval (450–850 ms) and a long-lasting valence-related effect (attractive > unattractive) in a late LPP interval (1000–3000 ms) were elicited by preferred-gender faces but not by dispreferred-gender faces. Multivariate pattern analysis (MVPA) classifications on whole-brain single-trial EEG patterns further confirmed these salience and valence effects. It is concluded that facial attractiveness elicits neural responses that are indicative of valenced experiences, but only if these faces are considered relevant. These experiences take time to develop and last well beyond the interval that is commonly explored.
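As an illustration of the rating-based selection described above, the following Python sketch shows one way to pick the 10% highest, middlemost, and lowest rated faces separately for each participant. The data frame, column names, number of faces, and rating scale are hypothetical and are not taken from the study.

```python
# Illustrative sketch only: per-participant selection of the 10% highest,
# middlemost, and lowest rated faces from (hypothetical) attractiveness ratings.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical ratings: one row per (participant, face) pair.
ratings = pd.DataFrame({
    "participant": np.repeat(np.arange(48), 120),
    "face_id": np.tile(np.arange(120), 48),
    "rating": rng.integers(1, 8, size=48 * 120),  # e.g. a 1-7 rating scale
})

def select_extremes(df: pd.DataFrame, frac: float = 0.10) -> dict:
    """Return face_ids in the lowest, middlemost, and highest `frac` of ratings."""
    df = df.sort_values("rating").reset_index(drop=True)
    n = len(df)
    k = max(1, int(round(frac * n)))
    low = df.iloc[:k]["face_id"].tolist()
    high = df.iloc[-k:]["face_id"].tolist()
    mid_start = (n - k) // 2
    mid = df.iloc[mid_start:mid_start + k]["face_id"].tolist()
    return {"low": low, "mid": mid, "high": high}

# Selection is done per participant, so contrasts reflect each individual's own ratings.
per_participant = {p: select_extremes(g) for p, g in ratings.groupby("participant")}
print(per_participant[0]["high"][:5])
```

The same per-participant splits would then serve as condition labels for the ERP and MVPA analyses described in the abstract.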
In this project, the AGM R&D team developed and refined the use of a facial scanning rig. The rig is a physical device comprising multiple cameras and lights mounted on scaffolding around a 'scanning volume': an area in which objects are placed before being photographed from multiple angles. The object is typically a person's head, but it can be anything of approximately that size. Software compares the photographs to create a digital 3D reconstruction, a process called photogrammetry. The 3D model is then processed by further pieces of software and eventually becomes a face that can be animated inside Unreal Engine, a popular piece of game development software made by the company Epic. The project was funded by Epic's 'Megagrant' system, and the focus of the work is on streamlining and automating the processing pipeline and on improving the quality of the resulting output. Additional work has been done on skin shaders (simulating the appearance of real skin in digital form) and on the use of AI to (re)create lifelike hairstyles. The R&D work has produced significant savings in processing time, improved the quality of facial scans, produced a system that has benefitted the educational offering of BUas, and attracted collaborators from the commercial entertainment and simulation industries. This work complements and extends previous work done on the VIBE project, where the focus was on creating lifelike human avatars for the medical industry.
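To give a sense of what streamlining and automating a photogrammetry processing pipeline can look like, the sketch below batches capture sessions through a command-line reconstruction step. The tool name photogrammetry_cli, its flags, and the folder layout are placeholders for illustration only and do not describe the actual pipeline used in the project.

```python
# Minimal sketch of automating a photogrammetry batch step. The CLI name,
# its flags, and the directory layout below are hypothetical placeholders.
import subprocess
from pathlib import Path

CAPTURE_ROOT = Path("captures")          # one sub-folder of photos per scan session (assumed layout)
OUTPUT_ROOT = Path("reconstructions")    # one output folder per reconstructed model

def reconstruct(session_dir: Path, out_dir: Path) -> None:
    """Run the (placeholder) photogrammetry CLI on one capture session."""
    out_dir.mkdir(parents=True, exist_ok=True)
    subprocess.run(
        ["photogrammetry_cli", "--input", str(session_dir), "--output", str(out_dir)],
        check=True,
    )

def main() -> None:
    # Process every capture session found under CAPTURE_ROOT.
    for session_dir in sorted(CAPTURE_ROOT.iterdir()):
        if session_dir.is_dir():
            reconstruct(session_dir, OUTPUT_ROOT / session_dir.name)

if __name__ == "__main__":
    main()
```

In a fuller pipeline, the reconstructed mesh would then be passed to the downstream processing and Unreal Engine import steps mentioned above; those stages are omitted here because their tooling is project-specific.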