The paper argues that a design approach will be essential to the future of e-democracy and e-governance. This development is driven by advances at the intersection of three fields: democracy, information technology, and design. Developments in these fields will result in a new scale, a new complexity, and demands for a new quality of democracy solutions. Design is essential to answering these new challenges. The article identifies a new generation of design thinking as a distinct new voice in the development of e-democracy and describes some of the consequences for democracy and governance. It argues that current approaches may be too narrow for designing new e-democracy solutions successfully, and that a broader critical reflection is necessary for both designers and the other stakeholders in the process.
DOCUMENT
The relevance of citizen participation in regeneration projects, particularly in shrinking cities, is widely acknowledged, and this topic has received a great deal of policy and academic attention. Although the many advantages of citizen participation in regeneration projects have been identified, its current forms have also received considerable criticism. In short, this criticism boils down to the conclusion that the ideal of citizen participation is not put into practice. This paper considers why this is the case, asking whether current participatory practices enable citizens to exercise influence as political actors in urban regeneration projects. In this paper, we examine this question based on Mouffe’s conception of the political, coupled with findings from our empirical research conducted in Heerlen North, The Netherlands. We conducted qualitative research on urban regeneration in the shrinking old industrial city of Heerlen. The findings reveal two distinct perspectives on citizen participation. Professionals see the existing context of citizen participation as a reasonable and practical but, in some respects, insufficient practice. Citizens’ views on participation are organized around feelings of anger, shame, and fear and are grounded in experiences of a lack of recognition. These experiences limit citizens’ abilities to exert true influence on regeneration projects. We conclude that efforts to regenerate shrinking cities should strive to recognize these experiences so as to create conditions that generate respect and esteem and, as such, enable urban social justice.
LINK
Moral food lab: Transforming the food system with crowd-sourced ethics
LINK
What options are open for people (citizens, politicians, and other nonscientists) to become actively involved in and anticipate new directions in the life sciences? In addressing this question, this article focuses on the start of the Human Genome Project (1985-1990). By contrasting various models of democracy (liberal, republican, deliberative), I examine the democratic potential the models provide for citizens' involvement in setting priorities and funding patterns related to big science projects. To enhance the democratizing of big science projects and give citizens opportunities to reflect, anticipate, and negotiate on new directions in science and technology at a global level, liberal democracy, with its national scope and representative structure, does not suffice. Although republican (communicative) and deliberative (associative) democracy models meet the need for greater citizen involvement, the ways to achieve this ideal at a global level still remain to be developed.
DOCUMENT
Preprint submitted to Information Processing & Management

Tags are a convenient way to label resources on the web. An interesting question is whether one can determine the semantic meaning of tags in the absence of a predefined formal structure such as a thesaurus. Many authors have used tag usage data to find their emergent semantics. Here, we argue that the semantics of tags can be captured by comparing the contexts in which tags appear. We operationalize this idea by defining what we call paradigmatic similarity: computing co-occurrence distributions of tags with tags in the same context, and comparing tags using information-theoretic similarity measures of these distributions, chiefly the Jensen-Shannon divergence. In experiments with three different tagged data collections, we study its behavior and compare it to other distance measures. For some tasks, such as terminology mapping or clustering, paradigmatic similarity seems to give better results than similarity measures based on the co-occurrence of the documents or other resources the tags are associated with. We argue that paradigmatic similarity is superior to other distance measures when agreement on topics (as opposed to style, register, or language) is the most important criterion and the main differences between the tagged elements in the data set correspond to different topics.
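The paradigmatic-similarity idea described above can be sketched in a few lines: for each tag, build the distribution of tags it co-occurs with, then compare two tags via the Jensen-Shannon divergence of those distributions. The toy tagging data and the function names below are illustrative assumptions, not taken from the paper.

```python
import math
from collections import Counter

def cooccurrence_distribution(tag, taggings):
    """Normalized distribution of tags co-occurring with `tag`.

    `taggings` maps each resource to the set of tags assigned to it.
    """
    counts = Counter()
    for tags in taggings.values():
        if tag in tags:
            counts.update(t for t in tags if t != tag)
    total = sum(counts.values())
    return {t: c / total for t, c in counts.items()}

def jensen_shannon(p, q):
    """Jensen-Shannon divergence (base 2, so values lie in [0, 1])."""
    support = set(p) | set(q)
    m = {t: 0.5 * (p.get(t, 0.0) + q.get(t, 0.0)) for t in support}
    def kl(a, b):
        return sum(a[t] * math.log2(a[t] / b[t]) for t in a if a[t] > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Toy data: two programming-related tags and one unrelated tag.
taggings = {
    "r1": {"python", "programming", "tutorial"},
    "r2": {"python", "programming", "code"},
    "r3": {"snake", "animal", "reptile"},
    "r4": {"coding", "programming", "tutorial"},
}
p = cooccurrence_distribution("python", taggings)
q = cooccurrence_distribution("coding", taggings)
print(f"JSD(python, coding) = {jensen_shannon(p, q):.3f}")
```

Tags used in similar contexts ("python" and "coding" both co-occur with "programming" and "tutorial") end up with a low divergence, while tags with disjoint contexts reach the maximum of 1.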
DOCUMENT
Because of the shortcomings of existing risk assessment methodologies, as well as the newly available machine learning tools for predicting hazard and risk, there has been an emerging emphasis on probabilistic risk assessment. Increasingly sophisticated AI models can be applied to a plethora of exposure and hazard data to obtain not only predictions for particular endpoints but also estimates of the uncertainty of the risk assessment outcome. This provides the basis for a shift from deterministic to more probabilistic approaches but comes at the cost of increased complexity, as the process requires more resources and human expertise. There are still challenges to overcome before a probabilistic paradigm is fully embraced by regulators. Based on an earlier white paper (Maertens et al., 2022), a workshop discussed the prospects, challenges, and path forward for implementing such AI-based probabilistic hazard assessment. Moving forward, we will see the transition from categorical to probabilistic and dose-dependent hazard outcomes, the application of internal thresholds of toxicological concern for data-poor substances, the adoption of user-friendly open-source software, a rise in the expertise required of toxicologists to understand and interpret artificial intelligence models, and the honest communication of uncertainty in risk assessment to the public.
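The deterministic-to-probabilistic shift described above can be illustrated with a minimal Monte Carlo sketch: instead of comparing a point estimate of exposure to a fixed hazard threshold, one propagates uncertainty in both quantities and reports an exceedance probability. The lognormal distributions and parameter values below are invented for illustration and are not from the workshop or white paper.

```python
import math
import random

random.seed(0)  # reproducible illustration

def probabilistic_risk(exposure_mu, exposure_sigma,
                       hazard_mu, hazard_sigma, n=100_000):
    """Monte Carlo estimate of P(exposure > hazard threshold).

    Both quantities are modeled as lognormal; the mu/sigma parameters
    refer to the underlying normal distribution of the logarithm.
    """
    exceed = 0
    for _ in range(n):
        exposure = random.lognormvariate(exposure_mu, exposure_sigma)
        threshold = random.lognormvariate(hazard_mu, hazard_sigma)
        if exposure > threshold:
            exceed += 1
    return exceed / n

# Median exposure (1.0) is well below the median threshold (3.0), so a
# deterministic point comparison would simply call this "safe"; the
# probabilistic view still reports a non-trivial exceedance probability.
p = probabilistic_risk(math.log(1.0), 0.5, math.log(3.0), 0.5)
print(f"P(exposure exceeds threshold) = {p:.3f}")
```

Reporting the exceedance probability rather than a binary pass/fail is one concrete form the "honest communication of uncertainty" mentioned above can take.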
DOCUMENT