The relevance of citizen participation in regeneration projects, particularly in shrinking cities, is widely acknowledged, and the topic has received a great deal of policy and academic attention. Although the many advantages of citizen participation in regeneration projects have been identified, its current forms have also received considerable criticism. In short, this criticism boils down to the conclusion that the ideal of citizen participation is not put into practice. This paper considers why this is the case, asking whether current participatory practices enable citizens to exercise influence as political actors in urban regeneration projects. We examine this question based on Mouffe's conception of the political, coupled with findings from our qualitative empirical research on urban regeneration in Heerlen North, a district of the shrinking old industrial city of Heerlen, the Netherlands. The findings reveal two distinct perspectives on citizen participation. Professionals see existing citizen participation as a reasonable and practical, if in some respects insufficient, practice. Citizens' views on participation are organized around feelings of anger, shame, and fear and are grounded in experiences of a lack of recognition. These experiences limit citizens' ability to exert real influence on regeneration projects. We conclude that efforts to regenerate shrinking cities should strive to recognize these experiences so as to create conditions that generate respect and esteem and thereby enable urban social justice.
The paper argues that a design approach will be essential to the future of e-democracy and e-governance. This development is driven by the intersection of three fields: democracy, information technology, and design. Developments in these fields will result in a new scale, a new complexity, and demands for a new quality of democracy solutions. Design is essential to answering these new challenges. The article identifies a new generation of design thinking as a distinct new voice in the development of e-democracy and describes some of the consequences for democracy and governance. It argues that, to design new solutions for e-democracy successfully, current approaches may be too narrow, and a broader critical reflection is necessary for both designers and other stakeholders in the process.
Urban experimentation has gained traction in (supra-)national and local politics as a method for catalyzing change in urban systems and practices. Yet, with experiments increasingly driven by established actors, concerns persist about their potential to sidestep the political issues of power, exclusion, and conflict that are fundamental to societal change. This paper seeks to unpack what exactly is at stake when the political is ignored or neutralized during an urban experiment. Using theories of the political as an analytical lens, the paper presents a case study of an urban experiment in Amsterdam, dissecting the ways in which (de)politicization operates in the experiment. The findings demonstrate that ignoring the political in urban experimentation risks excluding certain voices and options from being considered, which ultimately leads to stagnation. The paper concludes by outlining future challenges for research and practice that address (de)politicization in urban experiments.
Moral food lab: Transforming the food system with crowd-sourced ethics
What options are open for people (citizens, politicians, and other nonscientists) to become actively involved in and anticipate new directions in the life sciences? In addressing this question, this article focuses on the start of the Human Genome Project (1985-1990). By contrasting various models of democracy (liberal, republican, deliberative), I examine the democratic potential the models provide for citizens' involvement in setting priorities and funding patterns related to big science projects. To enhance the democratizing of big science projects and give citizens opportunities to reflect on, anticipate, and negotiate new directions in science and technology at a global level, liberal democracy, with its national scope and representative structure, does not suffice. Although republican (communicative) and deliberative (associative) democracy models meet the need for greater citizen involvement, the ways to achieve this ideal at a global level still remain to be developed.
Several studies have suggested that precision livestock farming (PLF) is a useful tool for animal welfare management and assessment. Location, posture, and movement of an individual are key elements in identifying the animal and recording its behaviour. Currently, multiple technologies are available for automated monitoring of the location of individual animals, ranging from Global Navigation Satellite Systems (GNSS) to ultra-wideband (UWB), RFID, wireless sensor networks (WSN), and even computer vision. These techniques and developments all hold potential for managing and assessing animal welfare, but they also have their constraints, such as range and accuracy. Combining sensors such as accelerometers with any location-determining technique into a sensor fusion system can give more detailed information on the individual cow, achieving an even more reliable and accurate indication of animal welfare. We conclude that location systems are a promising approach to determining animal welfare, especially when applied in conjunction with additional sensors, but additional research focused on the use of technology in animal welfare monitoring is needed.
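To make the fusion idea concrete, here is a minimal sketch, assuming hypothetical GNSS fixes and a per-animal accelerometer activity magnitude; the data fields, thresholds, and behaviour labels are illustrative placeholders, not values from the studies summarized above.

```python
from dataclasses import dataclass

@dataclass
class GnssFix:
    t: float  # timestamp in seconds
    x: float  # easting in metres
    y: float  # northing in metres

@dataclass
class AccelSample:
    t: float
    magnitude: float  # overall activity level in g

def ground_speed(prev: GnssFix, curr: GnssFix) -> float:
    """Speed between two consecutive location fixes (m/s)."""
    dt = curr.t - prev.t
    dist = ((curr.x - prev.x) ** 2 + (curr.y - prev.y) ** 2) ** 0.5
    return dist / dt

def classify_behaviour(prev: GnssFix, curr: GnssFix, accel: AccelSample) -> str:
    """Fuse location-derived speed with accelerometer activity into a
    coarse behaviour label. Thresholds are invented for illustration."""
    speed = ground_speed(prev, curr)
    if speed > 0.8:            # clearly moving: walking
        return "walking"
    if accel.magnitude > 0.3:  # near-stationary but active: likely grazing
        return "grazing"
    return "resting"           # stationary and inactive

# Example: a cow that barely moved but shows ongoing activity.
print(classify_behaviour(GnssFix(0, 0.0, 0.0),
                         GnssFix(10, 1.0, 0.5),
                         AccelSample(10, 0.45)))  # -> "grazing"
```

The point of the sketch is that neither stream alone distinguishes grazing from resting: location data sees both as stationary, while the accelerometer alone cannot tell grazing from walking.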
Tags are a convenient way to label resources on the web. An interesting question is whether one can determine the semantic meaning of tags in the absence of a predefined formal structure such as a thesaurus. Many authors have used tag usage data to find their emergent semantics. Here, we argue that the semantics of tags can be captured by comparing the contexts in which tags appear. We give an approach to operationalizing this idea by defining what we call paradigmatic similarity: computing co-occurrence distributions of tags with tags in the same context, and comparing tags using information-theoretic similarity measures on these distributions, mostly the Jensen-Shannon divergence. In experiments with three different tagged data collections, we study its behavior and compare it to other distance measures. For some tasks, such as terminology mapping or clustering, paradigmatic similarity seems to give better results than similarity measures based on the co-occurrence of the documents or other resources that the tags are associated with. We argue that paradigmatic similarity is superior to other distance measures if agreement on topics (as opposed to style, register, or language) is the most important criterion and the main differences between the tagged elements in the data set correspond to different topics.
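As a minimal sketch of paradigmatic similarity under these definitions: the snippet below builds, for each tag, a co-occurrence distribution over the other tags it appears with, and compares two tags with the Jensen-Shannon divergence. The toy data and function names are ours for illustration, not the paper's.

```python
import math
from collections import Counter, defaultdict

def context_distributions(tagged_resources):
    """For each tag, the probability distribution over the other tags
    it co-occurs with (its paradigmatic context)."""
    counts = defaultdict(Counter)
    for tags in tagged_resources:
        for tag in tags:
            counts[tag].update(t for t in tags if t != tag)
    return {tag: {t: n / sum(c.values()) for t, n in c.items()}
            for tag, c in counts.items()}

def jensen_shannon(p, q):
    """JSD(p, q) = 0.5 * KL(p || m) + 0.5 * KL(q || m), m the mixture."""
    m = {k: 0.5 * (p.get(k, 0.0) + q.get(k, 0.0)) for k in set(p) | set(q)}
    kl = lambda a: sum(v * math.log2(v / m[k]) for k, v in a.items() if v > 0)
    return 0.5 * kl(p) + 0.5 * kl(q)

# Toy collection: "python" and "py" never co-occur with each other,
# yet they share a context, so their divergence is 0 (maximally similar).
resources = [{"python", "code", "tutorial"},
             {"py", "code", "tutorial"},
             {"snake", "animal"}]
d = context_distributions(resources)
print(jensen_shannon(d["python"], d["py"]))     # 0.0
print(jensen_shannon(d["python"], d["snake"]))  # 1.0 (disjoint contexts)
```

This illustrates the contrast with resource-based co-occurrence measures: "python" and "py" would look maximally dissimilar there, since they never label the same resource, while their paradigmatic contexts reveal that they mean the same thing.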
Because of both the shortcomings of existing risk assessment methodologies and the newly available tools to predict hazard and risk with machine learning approaches, there has been an emerging emphasis on probabilistic risk assessment. Increasingly sophisticated AI models can be applied to a plethora of exposure and hazard data to obtain not only predictions for particular endpoints but also estimates of the uncertainty of the risk assessment outcome. This provides the basis for a shift from deterministic to more probabilistic approaches, but it comes at the cost of an increased complexity of the process, as it requires more resources and human expertise. There are still challenges to overcome before a probabilistic paradigm is fully embraced by regulators. Based on an earlier white paper (Maertens et al., 2022), a workshop discussed the prospects, challenges, and path forward for implementing such AI-based probabilistic hazard assessment. Moving forward, we will see the transition from categorical to probabilistic and dose-dependent hazard outcomes, the application of internal thresholds of toxicological concern for data-poor substances, the adoption of user-friendly open-source software, a rise in the expertise required of toxicologists to understand and interpret artificial intelligence models, and the honest communication of uncertainty in risk assessment to the public.
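As a minimal sketch of what a probabilistic rather than categorical hazard outcome could look like, assuming synthetic descriptor data and scikit-learn's RandomForestClassifier (our choice of model, not one prescribed by the workshop): the class probability serves as the hazard estimate, and the spread of per-tree votes as a crude uncertainty measure.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for chemical descriptors (X) and binary hazard labels (y).
X = rng.normal(size=(500, 8))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# A probabilistic outcome for a new substance: not "toxic / non-toxic",
# but P(hazard) plus an uncertainty estimate from the ensemble's spread.
x_new = rng.normal(size=(1, 8))
p_hazard = model.predict_proba(x_new)[0, 1]
votes = np.array([tree.predict(x_new)[0] for tree in model.estimators_])
print(f"P(hazard) = {p_hazard:.2f}, vote spread (std) = {votes.std():.2f}")
```

Reporting the spread alongside the point estimate is one simple way to make the uncertainty of the outcome explicit to regulators and the public, in line with the honest communication the abstract calls for.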