In this paper, we focus on how the qualitative vocabulary of Dynalearn, which is used for describing dynamic systems, corresponds to the mathematical equations used in quantitative modeling. Then, we demonstrate the translation of a qualitative model into a quantitative model, using the example of an object falling with air resistance.
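To make the qualitative-to-quantitative step concrete, the sketch below shows one possible quantitative counterpart of the falling-object example. The linear drag law and all parameter values here are assumptions chosen for illustration, not the equations or values used in the paper: the qualitative statement "air resistance grows with velocity and opposes gravity" is translated into the ordinary differential equation m*dv/dt = m*g - c*v and integrated with a simple Euler scheme.

```python
# Illustrative sketch (assumed, not taken from the paper): a quantitative model of an
# object falling with air resistance, using a linear drag term c*v.
m = 80.0    # mass of the falling object (kg) -- placeholder value
g = 9.81    # gravitational acceleration (m/s^2)
c = 12.0    # linear drag coefficient (kg/s) -- placeholder value
dt = 0.01   # integration time step (s)

v = 0.0     # initial velocity (m/s)
t = 0.0
while t < 30.0:
    # net acceleration: gravity downward, drag opposing motion
    dv_dt = g - (c / m) * v
    v += dv_dt * dt
    t += dt

print(f"velocity after {t:.0f} s: {v:.2f} m/s "
      f"(terminal velocity m*g/c = {m * g / c:.2f} m/s)")
```

As the simulation runs, the velocity approaches the terminal velocity m*g/c, which is the quantitative analogue of the qualitative equilibrium where the drag influence balances the gravitational influence.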
Smart city technologies, including artificial intelligence and computer vision, promise to bring a higher quality of life and more efficiently managed cities. However, developers, designers, and professionals working in urban management have started to realize that implementing these technologies poses numerous ethical challenges. Policy papers now call for human and public values in tech development, ethics guidelines for trustworthy AI, and cities for digital rights. In a democratic society, these technologies should be understandable for citizens (transparency) and open to scrutiny and critique (accountability).

When implementing such public values in smart city technologies, professionals face numerous knowledge gaps. Public administrators find it difficult to translate abstract values like transparency into concrete specifications for designing new services. In the private sector, developers and designers still lack a 'design vocabulary' and exemplary projects that can inspire them to respond to transparency and accountability demands. Finally, both the public and private sectors see a need to include the public in the development of smart city technologies, but have not yet found the right methods.

This proposal aims to help these professionals develop an integrated, value-based and multi-stakeholder design approach for the ethical implementation of smart city technologies. It does so by setting up a research-through-design trajectory to develop a prototype for an ethical 'scan car', as a concrete and urgent example of the deployment of computer vision and algorithmic governance in public space. Three (practical) knowledge gaps will be addressed. With civil servants at municipalities, we will create methods enabling them to translate public values such as transparency into concrete specifications and evaluation criteria. With designers, we will explore methods and patterns to answer these value-based requirements. Finally, we will further develop methods to engage civil society in this process.
The PhD research by Joris Weijdom studies the impact of collective embodied design techniques in collaborative mixed-reality environments (CMREs) in art and engineering design practice and education. He aims to stimulate invention and innovation from an early stage of the collective design process. Joris combines theory and practice from the performing arts, human-computer interaction, and engineering to develop CMRE configurations, strategies for their creative implementation, and an embodied immersive learning pedagogy for students and professionals.

This lecture was given at the Transmedia Arts seminar of the Mahindra Humanities Center of Harvard University. In this lecture, Joris Weijdom discusses critical concepts, such as embodiment, presence, and immersion, that concern mixed-reality design in the performing arts. He introduces examples from his own practice and from interdisciplinary projects by other artists.

About the research

Multiple research areas now support the idea that embodiment is an underpinning of cognition, suggesting new discovery and learning approaches through full-body engagement with the virtual environment. Furthermore, improvisation and immediate reflection on the experience itself, common creative strategies in artist training and practice, are central when inventing something new. In this research, a new embodied design method, entitled Performative prototyping, has been developed to enable interdisciplinary collective design processes in CMREs and to offer a vocabulary of multiple perspectives for reflecting on its outcomes.

Studies also find that engineering education values creativity in design processes, but often disregards the potential of full-body improvisation in generating and refining ideas. Conversely, artists lack the technical know-how to utilize mixed-reality technologies in their design process. This know-how from multiple disciplines is thus combined and explored in this research, connecting concepts and discourse from human-computer interaction and media and performance studies.

This research is a collaboration of the University of Twente, Utrecht University, and HKU University of the Arts Utrecht. It is partly financed by the Dutch Research Council (NWO).

Mixed-reality experiences merge real and virtual environments in which physical and digital spaces, objects, and actors co-exist and interact in real time. Collaborative Mixed-Reality Environments (CMREs) enable creative design and learning processes through full-body interaction with spatial manifestations of mediated ideas and concepts, as live-puppeteered or automated real-time computer-generated content. They employ large-scale projection mapping techniques, motion capture, augmented- and virtual-reality technologies, and networked real-time 3D environments in various interconnected configurations.

This keynote was given at the IETM Plenary meeting in Amsterdam for more than 500 theatre and performing arts professionals. It addresses the following questions in a roller-coaster ride of thought-provoking ideas and examples from the world of technology, media, and theatre: What do current developments like Mixed Reality, Transmedia, and the Internet of Things mean for telling stories and creating theatrical experiences? How do we design performances on multiple "stages" and relate to our audiences when they become co-creators?

Contact: joris.weijdom@hku.nl

This research is part of the professorship Performative Processes.