ABSTRACT Purpose: This short paper describes the design process of a dashboard for monitoring online hate speech across multiple languages and platforms. Methodology/approach: A case study approach was adopted in which the authors followed a research and development project for a multilingual, multi-platform online dashboard for monitoring online hate speech. The case under study is the project for the European Observatory of Online Hate (EOOH). Results: We outline the design and prototype-development process, for which a design thinking approach was followed involving multiple potential user groups of the dashboard. The paper presents the outcome of this process and the initial use of the dashboard. The issues identified, such as obfuscation of the context or the identity of the user accounts behind social media posts, limit the dashboard's usability while offering a trade-off in privacy protection; they may contribute to the discourse on privacy and data protection in (big data) social media analysis for practitioners. Research limitations/implications: The results stem from a single case study. Still, they may be relevant for other projects on online hate speech detection and monitoring that involve big data analysis and human annotation. Practical implications: The study emphasises the need to involve diverse user groups and a multidisciplinary team in developing a dashboard for online hate speech. The context in which potential online hate is disseminated, and the network of accounts distributing or interacting with that hate speech, seem relevant for analysis by part of the dashboard's user groups. International Information Management Association
LINK
The high percentages of people confronted with online hate or threats speak for themselves. The social norm that online discrimination is unacceptable and not okay appears to be barely present. From August 2019 through August 2022, the project #DatMeenJeNiet by Movisie, Hogeschool Inholland and Diversity Media sought to contribute to changing this norm. This is the final report of the studies carried out within this project.
DOCUMENT
In this article, the main question is whether and, if so, to what extent online journalism raises new moral issues and, if any, what kind of answers are preferable. Or do the questions merely appear new, since they are really old ones in an electronic wrapping, old wine in new bottles? And how does journalism deal with the moral aspects of online journalism? The phenomenon of the Internet emerged in our society a few years ago. Since then, a large number of Dutch people have gone online, and the World Wide Web is now an integral part of our range of means of communication. Dutch journalism is online too, although certainly not in the lead. More and more journalists use the Internet as a source, especially for background information. Newspapers have their websites, where the online version of the printed paper can be read. And that is it for the time being. There are no more far-reaching developments at present, certainly not on a large scale. Real online journalism is rather scarce in the Netherlands. The debate concerning the moral aspects of online journalism is being conducted mainly in the United States. First of all, by way of introduction, I will present an outline of online journalism. The first instance is the online version of the newspaper. Here, new issues come up for discussion only to a certain degree, since the newspapers' reputation for reliability and accuracy, in spite of all criticism, also applies to their online versions. Besides, especially in the United States and increasingly in European countries as well, there is the so-called dotcom journalism: the e-zines, the online news sites without any relationship to printed newspapers. This may be the reason why these sites do not have a strong commitment to moral standards, at least as those standards have developed in the journalistic culture of the newspapers.
After having outlined the moral issues arising in online journalism, the question will be addressed whether and, if so, to what extent it is meaningful and desirable to develop instruments of self-regulation for this new phenomenon of journalism.
DOCUMENT
Content moderation is commonly used by social media platforms to curb the spread of hateful content. Yet, little is known about how users perceive this practice and which factors may influence their perceptions. Publicly denouncing content moderation—for example, portraying it as a limitation to free speech or as a form of political targeting—may play an important role in this context. Evaluations of moderation may also depend on interpersonal mechanisms triggered by perceived user characteristics. In this study, we disentangle these different factors by examining how the gender, perceived similarity, and social influence of a user publicly complaining about a content-removal decision influence evaluations of moderation. In an experiment (n = 1,586) conducted in the United States, the Netherlands, and Portugal, participants witnessed the moderation of a hateful post, followed by a publicly posted complaint about moderation by the affected user. Evaluations of the fairness, legitimacy, and bias of the moderation decision were measured, as well as perceived similarity and social influence as mediators. The results indicate that arguments about freedom of speech significantly lower the perceived fairness of content moderation. Factors such as social influence of the moderated user impacted outcomes differently depending on the moderated user’s gender. We discuss implications of these findings for content-moderation practices.
DOCUMENT
This study explores how TikTok Live’s fusion of immediacy, interactivity, and monetisation creates a powerful infrastructure for political communication, one increasingly exploited for extremist mobilisation and disinformation. Focusing on far-right actors in Germany, it combines technical monitoring, content analysis, and policy review to examine how extremist networks exploit the platform’s live-streaming affordances to spread propaganda, monetise hate, and evade moderation, often in ways that outpace both TikTok’s self-regulation and external oversight under the EU’s Digital Services Act (DSA).
MULTIFILE
This paper examines hate speech traces within comments about the Catalan independentist procés embedded in news published about Lionel Messi in Madrid’s and Barcelona’s online newspapers (Abc, La Vanguardia, Mundo Deportivo, Marca, El mundo, As) during the period 2019-2021. Starting from 2,639 news items with Messi in the headline, quantitative techniques were applied to identify those with the highest volume of political terms, and their comment threads were then studied in depth by means of qualitative discourse analysis. The results show that in Madrid news and comments about Messi are leveraged to discuss the procés, while in Barcelona both press and commenters refrain from tying politics to the footballer. Two newspapers from Barcelona (La Vanguardia, Mundo Deportivo) and two from Madrid (Marca, El mundo) gather the 12 threads with the highest prevalence of political comments: 487 in total. Their analysis reveals that opinions in sports newspapers are more diverse than in the general press and, consequently, show more conflict and more hate messages linked to opposing political views. The first comments (1-25) in a thread turn out to be the most followed 77% of the time, making the users who comment first more influential than the latecomers. This research concludes that hate speech appears more in structures and argumentation than in specific words, but its presence is not necessarily negative and can create a boomerang effect against the hate message if it is defeated in the subsequent online dispute.
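The quantitative step described above, ranking comment threads by the volume of political terms they contain, can be sketched as follows. This is a minimal illustration, not the study's actual method: the term list, thread structure, and sample comments are all hypothetical placeholders.

```python
# Hypothetical sketch of keyword-based thread filtering: count occurrences
# of political terms per comment thread and rank threads by prevalence.
# The term list and sample data are illustrative, not the study's lexicon.
import re

POLITICAL_TERMS = {"procés", "independentismo", "cataluña", "república"}  # illustrative

def political_term_count(text: str) -> int:
    """Count tokens in the text that match the political-term list."""
    tokens = re.findall(r"\w+", text.lower())
    return sum(1 for token in tokens if token in POLITICAL_TERMS)

def rank_threads(threads: dict[str, list[str]]) -> list[tuple[str, int]]:
    """Rank comment threads by total political-term occurrences, descending."""
    scores = {tid: sum(political_term_count(c) for c in comments)
              for tid, comments in threads.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Sample threads (invented): one political, one purely about football.
threads = {
    "t1": ["Gran gol de Messi", "El procés no tiene nada que ver con esto"],
    "t2": ["Messi es el mejor", "Qué partido"],
}
print(rank_threads(threads))  # the thread mentioning the procés ranks first
```

Threads scoring above a chosen threshold would then be handed over to the qualitative discourse analysis stage.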
DOCUMENT
The unexpected death of a child is one of the most challenging losses, as it fractures survivors’ sense of parenthood and other layers of identity. Given that not all bereaved parents who need support respond well to available treatments, and that many have little access to further intervention or follow-up over time, online interventions featuring therapeutic writing and peer support have strong potential. In this article we explore how a group of bereaved mothers experienced the process of participating in an online course in therapeutic writing for the integration of grief. Our research questions were: How do parents who have lost a child experience being part of an online course in therapeutic writing? What are the perceived benefits and challenges of writing in processing their grief? We followed an existential phenomenological approach and analyzed fieldwork notes (n = 13), qualitative data from the application and assessment surveys (n = 35; n = 21), excerpts from the journals of some participants (n = 3), and email correspondence with some participants (n = 5). We categorized the results in three meaning units: (1) where does my story begin? The “both and” of their silent chaos; (2) standing on the middle line: a pregnancy that does not end; (3) closures and openings: “careful optimism” and the need for community support. Participants experienced writing as an opportunity for self-exploration regarding their identities and their emotional world, as well as a means to develop and strengthen a bond with their children. They also experienced a sense of belonging, validation, and acceptance in the online group in a way that helped them make sense of their suffering. Online writing courses could be of benefit for bereaved parents who are grieving the unexpected death of a child, but they do not replace other interventions such as psychotherapy.
In addition to trauma and attachment informed models of grief, identity informed models with a developmental focus might enhance the impact of both low-threshold community interventions and more intensive clinical ones. Further studies and theoretical development in the area are needed, addressing dialogical notions such as the multivoicedness of the self. Lehmann OV, Neimeyer RA, Thimm J, Hjeltnes A, Lengelle R and Kalstad TG (2022) Experiences of Norwegian Mothers Attending an Online Course of Therapeutic Writing Following the Unexpected Death of a Child. Front. Psychol. 12:809848. doi: 10.3389/fpsyg.2021.809848
DOCUMENT
Social media platforms such as Facebook, YouTube, and Twitter have millions of users logging in every day, using these platforms for communication, entertainment, and news consumption. These platforms adopt rules that determine how users communicate and thereby limit and shape public discourse. Platforms need to deal with large amounts of data generated every day. For example, as of October 2021, 4.55 billion social media users were active, on an average of 6.7 platforms used each month per internet user. As a result, platforms were compelled to develop governance models and content moderation systems to deal with harmful and undesirable content, including disinformation. In this study:
• ‘Content governance’ is defined as a set of processes, procedures, and systems that determine how a given platform plans, publishes, moderates, and curates content.
• ‘Content moderation’ is the organised practice of a social media platform of pre-screening, removing, or labelling undesirable content to reduce the damage that inappropriate content can cause.
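The pre-screen/remove/label workflow that the definition above describes can be sketched as a toy decision function. This is a deliberately simplistic illustration, not any platform's actual system: the blocklist terms and the decision thresholds are hypothetical, and real moderation pipelines combine machine-learning classifiers, human review, and appeal processes.

```python
# Toy sketch of a pre-screening moderation step: check an incoming post
# against a blocklist and decide whether to allow, label, or remove it.
# Blocklist terms and thresholds are hypothetical placeholders.
from dataclasses import dataclass

BLOCKLIST = {"slur1", "slur2"}  # placeholder terms, not a real policy

@dataclass
class Decision:
    action: str  # "allow", "label", or "remove"
    reason: str

def moderate(post: str) -> Decision:
    """Pre-screen a post: remove on multiple hits, label on one, else allow."""
    words = set(post.lower().split())
    hits = sorted(words & BLOCKLIST)
    if len(hits) >= 2:
        return Decision("remove", f"multiple blocklisted terms: {hits}")
    if hits:
        return Decision("label", f"blocklisted term: {hits}")
    return Decision("allow", "no blocklisted terms found")

print(moderate("slur1 is unacceptable").action)  # "label"
print(moderate("what a lovely day").action)      # "allow"
```

In practice the interesting governance questions begin where this sketch ends: who sets the list, how borderline cases are escalated to human reviewers, and how removal decisions can be contested.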
MULTIFILE
What you don’t know can’t hurt you: this seems to be the current approach for responding to disinformation by public regulators across the world. Nobody is able to say with any degree of certainty what is actually going on. This is in no small part because, at present, public regulators don’t have the slightest idea how disinformation actually works in practice. We believe that there are very good reasons for the current state of affairs, which stem from a lack of verifiable data available to public institutions. If an election board or a media regulator wants to know what types of digital content are being shared in their jurisdiction, they have no effective mechanisms for finding this data or ensuring its veracity. While there are many other reasons why governments would want access to this kind of data, the phenomenon of disinformation provides a particularly salient example of the consequences of a lack of access to this data for ensuring free and fair elections and informed democratic participation. This chapter will provide an overview of the main aspects of the problems associated with basing public regulatory decisions on unverified data, before sketching out some ideas of what a solution might look like. In order to do this, the chapter develops the concept of auditing intermediaries. After discussing which problems the concept of auditing intermediaries is designed to solve, it then discusses some of the main challenges associated with access to data, potential misuse of intermediaries, and the general lack of standards for the provision of data by large online platforms. In conclusion, the chapter suggests that there is an urgent need for an auditing mechanism to ensure the accuracy of transparency data provided by large online platform providers about the content on their services. Transparency data that have been audited would be considered verified data in this context. 
Without such a transparency verification mechanism, existing public debate is based merely on a whim, and digital dominance is likely to only become more pronounced.
MULTIFILE
Pupils grow up in a world that is permanently online. They have access to a vast amount of information, and they are constantly interacting online. Education can train pupils to become media-literate citizens.
LINK