Studying images in social media poses specific methodological challenges, which in turn have directed scholarly attention towards the computational interpretation of visual data. When analyzing large numbers of images, both traditional content analysis and cultural analytics have proven valuable. However, these techniques do not take into account the circulation and contextualization of images within a socio-technical environment. As the meaning of social media images is co-created by networked publics, bound through networked practices, these visuals should be analyzed at the level of their networked contextualization. Although machine vision is increasingly adept at recognizing faces and features, its performance in grasping the meaning of social media images remains limited. However, combining automated analyses of images - broken down by their compositional elements - with repurposed platform data opens up the possibility of studying images in the context of their resonance within and across online discursive spaces. This paper explores the capacity of platform data - hashtag modularity and retweet counts - to complement the automated assessment of social media images, doing justice both to the visual elements of an image and to the contextual elements encoded by the networked publics that co-create its meaning.
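The abstract does not specify an implementation, but the core move it describes - repurposing hashtags and retweet counts to measure an image's networked resonance - can be sketched minimally. The snippet below (a hypothetical illustration; the post records, field names, and thresholds are assumptions, not the paper's data) tallies per-hashtag retweet resonance and hashtag co-occurrence counts, the raw input one would feed into modularity-based clustering of discursive spaces.

```python
from collections import Counter, defaultdict
from itertools import combinations

# Hypothetical records of image posts: the hashtags an image circulates
# with, plus its retweet count as a proxy for resonance.
posts = [
    {"hashtags": ["climate", "protest"], "retweets": 120},
    {"hashtags": ["climate", "art"], "retweets": 15},
    {"hashtags": ["protest", "art"], "retweets": 40},
]

# Resonance of each hashtag: total retweets of the images it accompanies.
resonance = Counter()
# Co-occurrence counts: how often two hashtags appear on the same image,
# the edge weights for a subsequent hashtag-modularity analysis.
cooccurrence = defaultdict(int)

for post in posts:
    for tag in post["hashtags"]:
        resonance[tag] += post["retweets"]
    for a, b in combinations(sorted(post["hashtags"]), 2):
        cooccurrence[(a, b)] += 1

print(resonance.most_common())
print(dict(cooccurrence))
```

In a full analysis, the co-occurrence pairs would be passed to a community-detection routine (e.g. modularity maximization on the hashtag graph) to delineate the discursive spaces within which an image's visual elements are then interpreted.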
Since the arrival of cinema, film theorists have studied how spectators perceive the representations that the medium offers to our senses. Early film theorists pondered what cinema is and how it can be seen as art, but also what cinema is capable of. One of the earliest film theorists, Hugo Münsterberg, argued in 1916 that the uniqueness of cinema, or the photoplay as he calls it, lies in the way it offers the possibility to represent our mental perception and organisation of reality, or the world we live in: “the photoplay tells us the human story by overcoming the forms of the outer world, namely, space, time, and causality, and by adjusting the events to the forms of the inner world, namely, attention, memory, imagination, and emotion” (Münsterberg [1916] 2004, 402).
In order to be successful in today’s competitive environment, brands must have well-established identities. Therefore, during the branding process it is necessary to attribute personality traits and visual elements that best represent the desired identity of the brand. With the recent advances in communication, scholars have analyzed how different visual elements (e.g., logo, typography, color) can visually represent the desired brand personality. However, these elements are typically analyzed separately, and few studies examine the association of personality traits with the brand’s full set of visual elements (the well-known “visual identity”). Therefore, this work aims to develop a methodological framework for designing visual identity based on the Dimensions of Brand Personality, by assigning a set of visual elements (colors, typographies, and shapes) to each dimension (Sincerity, Excitement, Competence, Sophistication, and Ruggedness) suggested by Aaker in 1997. Through a quantitative-qualitative approach, the associations suggested in the proposed framework were tested through a questionnaire applied to a sample of consumers, to gather information about their perceptions. Preliminary results suggest that the proposed framework can successfully generate the desired brand personality perception in consumers, according to the design elements used for the creation of the visual brand identity.
"Speak the Future" presents a novel test case at the intersection of scientific innovation and public engagement. Leveraging the power of real-time AI image generation, the project empowers festival participants to verbally describe their visions for a sustainable and regenerative future. These descriptions are instantly transformed into captivating imagery using SDXL Turbo, fostering collective engagement and tangible visualisation of abstract sustainability concepts. This unique interplay of speech recognition, AI, and projection technology breaks new ground in public engagement methods. The project offers valuable insights into public perceptions and aspirations for sustainability, as well as into the effectiveness of AI-powered visualisation and regenerative applications of AI. Ultimately, this will serve as a springboard for PhD research aiming to understand how AI can serve as a vehicle for crafting regenerative futures. By employing real-time AI image generation, the project directly tests its effectiveness in fostering public engagement with sustainable futures. Analysing participant interaction and feedback sheds light on how AI-powered visualisation tools can enhance comprehension and engagement. Furthermore, the project fosters public understanding and appreciation of research. The interactive and accessible nature of "Speak the Future" demystifies the research process, showcasing its relevance and impact on everyday life. Moreover, by directly involving the public in co-creating visual representations of their aspirations, the project builds an emotional connection and sense of ownership, potentially leading to continued engagement and action beyond the festival setting. "Speak the Future" promises to be a groundbreaking initiative, bridging the gap between scientific innovation and public engagement in sustainability discourse.
By harnessing the power of AI for collective visualisation, the project not only gathers valuable data for researchers but also empowers the public to envision and work towards a brighter, more sustainable future.
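The speech-to-image loop the abstract describes can be sketched in outline. In the sketch below, the transcriber and generator are injected as callables so the pipeline logic stands on its own; in the actual installation these would presumably be a speech-recognition model and SDXL Turbo (e.g. via a diffusers text-to-image pipeline, which supports single-step inference). The function name, prompt prefix, and stub models are illustrative assumptions, not details from the project.

```python
def speak_the_future(audio, transcribe, generate,
                     prefix="A regenerative future: "):
    """Turn a participant's spoken description into a projected image.

    `transcribe` maps raw audio to text; `generate` maps a text prompt to
    an image. The prefix (an assumed design choice) steers every prompt
    toward the festival's sustainability theme.
    """
    description = transcribe(audio)
    prompt = prefix + description.strip()
    return prompt, generate(prompt)

# Stub components stand in for the real speech and image models here.
prompt, image = speak_the_future(
    b"...raw audio bytes...",
    transcribe=lambda audio: "a city with rooftop gardens",
    generate=lambda prompt: f"<image for: {prompt}>",
)
print(prompt)
```

Injecting the models keeps the engagement loop (listen, transcribe, prompt, generate, project) testable and swappable, which matters when a festival installation must run in real time.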