Are we defenseless against AI, deepfakes, and the rapid spread of disinformation? Join the first episode in our podcast series, part of the Jean Monnet Chair EU-ACT DIGITAL, an initiative spotlighting EU digital policy. Our very first guest, Member of the European Parliament Bart Groothuis (Committee on Industry, Research and Energy), shares his expert insights on the state of digitalisation in the EU in an interview with European Impact’s Paul Schuchhard and European Studies students Francisco van Ruijven and Joana Pereira Grilo from The Hague University of Applied Sciences. Visit https://eu-act.digital/ to find out more about the Jean Monnet Chair EU-ACT DIGITAL.
SMILES is an international project in which six organisations from three countries collaborate to develop and test innovative approaches to combating the spread of fake news. The project is led by KB, the National Library of the Netherlands; the other partners are the Netherlands Institute for Sound and Vision and The Hague University of Applied Sciences from the Netherlands, Fundación Goteo/Platoniq from Spain, and Public Libraries 2030 and the Media and Learning Association, both of which focus on the situation in Belgium. In this article I report on the main results of the baseline study carried out in recent months on the spread of disinformation in Belgium, the Netherlands and Spain, and on the existing measures and interventions in those countries to combat such disinformation. Detailed reports for each country can be found on the project website.
Despite increased attention since 2015, there is little consensus on why audiences believe or share disinformation. In our study, we propose a shift in analytical perspective by applying the concept of resilience. Through a systematic literature review (n = 95), we identify factors that have been linked to individuals’ resilience and vulnerability to disinformation thus far. Our analysis reveals twelve factors: thinking styles, political ideology, worldview and beliefs, pathologies, knowledge, emotions, (social) media use, demographics, perceived control, trust, culture, and environment. By applying the results to the socio-ecological model (SEM), we provide a comprehensive view on what constitutes resilience to disinformation, delineate between different levels of influence, and identify relevant gaps in research. Our conceptualization contributes to an under-theorized field, in which the term resilience is much used yet rarely sufficiently defined.
While there is extensive research on how Russian interference – in particular Russian disinformation operations – has played out in different European countries, indications of Russian interference directly targeting the EU, its institutions or its policies have received little attention. This paper argues why there is good reason to assume that the EU, its institutions and its policies are an ideal target for authoritarian regimes to exploit. It then explores in what ways, if any, Russian disinformation campaigning targeted EU institutions and their policies during the political and electoral campaigns leading up to the European Parliament (EP) elections of May 2019. In this context, disinformation campaigning in terms of both network flows and content (‘narratives’) has been examined, on the basis of a review of various reports identifying Russian interference and disinformation, analyses of overall disinformation flows in Europe, and a database monitoring occurrences of disinformation.
How do we promote media literacy? How do we combat disinformation in an increasingly challenging environment? Join the second episode in our podcast series, part of the Jean Monnet Chair EU-ACT DIGITAL, an initiative spotlighting EU digital policy. In this episode, Patricia van Rijswijk and Julia Conemans from Beeld & Geluid (Sound & Vision) talk about media literacy, digital resilience, and the activities of Beeld & Geluid on media and disinformation in an interview with European Impact’s Theo Zijderveld and Mihai Postelnicu from The Hague University of Applied Sciences. Visit https://eu-act.digital/ to find out more about the Jean Monnet Chair EU-ACT DIGITAL.
In Intellectual Output 1 of the SMILES project, researchers from Belgium (Flanders), the Netherlands and Spain conducted desk research to describe the current developments around disinformation in each country, particularly those related to the Covid-19 pandemic. In part 2 of the research, they identified training initiatives, courses and media literacy training tools in each country that specifically focus on combating disinformation or promoting resistance to it. Each identified activity or tool was characterised by a fixed set of characteristics (appendix 1). In the second stage of this research, experts from each country were interviewed. Among other things, they were asked for recommendations and tips for the interventions to be developed in Intellectual Output 2 of the SMILES project. All research results were reported in separate country reports. This joint report presents the highlights of those country reports and ends with recommendations for the interventions to be developed in Intellectual Output 2.
Between 1 March 2021 and 30 April 2023, a consortium carried out an Erasmus+-funded research project on news media literacy among young people. The consortium consisted of, in the Netherlands, the National Library of the Netherlands (Koninklijke Bibliotheek, KB), The Hague University of Applied Sciences and the Netherlands Institute for Sound and Vision in Hilversum; in Belgium, the Media & Learning Association in Leuven and Public Libraries 2030 in Brussels; and in Spain, Fundación Platoniq in Barcelona. The project involved Dutch, Belgian and Spanish young people aged 12-15. The acronym SMILES, which stands for 'innovative methodS for Media & Information Literacy Education involving schools and librarieS', was chosen as the project title. The main goals of the SMILES project are:
• Forming pairs of librarians and secondary school teachers in the three European countries, who were empowered through train-the-trainer workshops to teach secondary school students about news media literacy in relation to disinformation;
• Helping students use digital technologies more safely and responsibly, with a focus on recognising reliable and authentic information and becoming more resilient to disinformation;
• Developing five building blocks serving as teaching materials for Dutch, Belgian and Spanish pupils aged 12-15, with the aim of enabling them to recognise disinformation and making them more resilient to it;
• A scientific evaluation of the effectiveness of the implemented lessons through impact measurement using 'pre-knowledge tests' and 'post-knowledge tests';
• Strengthening existing collaborations and creating new collaborations between schools and libraries in the three partner countries.
The SMILES project was implemented through three work packages. In the first work package, five so-called 'baseline studies', or literature reviews, were conducted.
The focus was on the different educational approaches to disinformation in Spain, Belgium and the Netherlands, and on how these approaches can be linked. Based on these studies, the five building blocks were developed in the second work package. In addition, the teaching pairs were offered the training programme developed by SMILES through a 'train-the-trainer methodology', enabling them to deploy digital media tools safely and responsibly during lessons with students. Also based on the disinformation literature, knowledge tests were designed to measure the impact of the train-the-trainer workshops and of the lessons on the trainers (teaching pairs) and the students, respectively. These knowledge tests contained statements about disinformation that respondents answered correctly or incorrectly. The number of correctly answered statements prior to the lessons was compared with the number of correctly answered statements after the lessons, in an attempt to demonstrate a positive learning effect of the lessons. In the third work package, the results from the pre-knowledge tests and the post-knowledge tests were analysed. In addition to these quantitative analyses, qualitative results were used to examine the extent to which the training provided to the trainers (teaching pairs) and the lessons built on the five building blocks were effective in helping students recognise disinformation and become more resilient to it. In doing so, we also reflect on whether the methodology tested has been effective in the three countries: what are the best practices, and where do we see areas for improvement?
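The impact measurement described above boils down to a paired comparison of correct answers before and after the lessons. A minimal sketch of that calculation, using made-up placeholder scores rather than actual SMILES data, could look like this:

```python
# Paired pre/post knowledge-test comparison (illustrative sketch).
# The scores below are invented placeholders, not SMILES results.
pre_scores = [6, 7, 5, 8, 6, 7]   # correct statements per student, before the lessons
post_scores = [8, 9, 7, 9, 8, 8]  # correct statements per student, after the lessons

mean_pre = sum(pre_scores) / len(pre_scores)
mean_post = sum(post_scores) / len(post_scores)

# Per-student gain: positive values suggest a learning effect.
gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
mean_gain = sum(gains) / len(gains)

print(f"mean pre-test score:  {mean_pre:.2f}")
print(f"mean post-test score: {mean_post:.2f}")
print(f"mean paired gain:     {mean_gain:.2f}")
```

In practice one would also test whether the mean gain is statistically significant (for example with a paired t-test) rather than relying on the raw difference alone.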
This study explores how TikTok Live’s fusion of immediacy, interactivity, and monetization creates a powerful infrastructure for political communication, one increasingly exploited for extremist mobilisation and disinformation. Focusing on far-right actors in Germany, it combines technical monitoring, content analysis, and policy review to examine how extremist networks exploit the platform’s live-streaming affordances to spread propaganda, monetize hate, and evade moderation, often in ways that outpace both TikTok’s self-regulation and external oversight under the EU’s Digital Services Act (DSA).
Social media platforms such as Facebook, YouTube, and Twitter have millions of users logging in every day, using these platforms for communication, entertainment, and news consumption. These platforms adopt rules that determine how users communicate and thereby limit and shape public discourse. Platforms need to deal with large amounts of data generated every day. For example, as of October 2021, 4.55 billion social media users were active, using an average of 6.7 platforms per internet user each month. As a result, platforms were compelled to develop governance models and content moderation systems to deal with harmful and undesirable content, including disinformation. In this study:
• ‘Content governance’ is defined as the set of processes, procedures, and systems that determine how a given platform plans, publishes, moderates, and curates content.
• ‘Content moderation’ is the organised practice of a social media platform of pre-screening, removing, or labelling undesirable content to reduce the damage that inappropriate content can cause.
Don’t mind me while I drive through your neighborhood taking photos of your house, gathering your emails, passwords and other private information from your wifi network. It’s nothing personal, I’m doing it to everyone, in every street, in over 30 countries. Perhaps you can also excuse me while I give access to data you and your friends shared with me and each other to individuals and companies I have no relation to or control over at all. And while I’m leaking your data (again, it’s nothing personal, I’m doing it to 87 million others), you probably won’t mind me showing you and 126 million others some political disinformation; there’s an election coming and I could really use the money. It’s not as if we don’t know each other: I’ve been following your every move online for years now, and it’s no secret that I’m worth hundreds of billions because I sell access to you, promising my customers influence over your voting and purchasing behavior. I’ve got power. Monopolies are rare lol. If all this makes you uncomfortable, you can always cut ties with me and everyone you work and communicate online with, but what would that solve? Your friends are totally oversharing…