Disinformation and so-called fake news are contemporary phenomena with rich histories. Disinformation, or the willful introduction of false information for the purposes of causing harm, recalls infamous foreign interference operations in national media systems. Outcries over fake news, or dubious stories with the trappings of news, have coincided with the introduction of new media technologies that disrupt the publication, distribution and consumption of news -- from the so-called rumour-mongering broadsheets of centuries past to the blogosphere of recent years. Designating a news organization as fake, or die Lügenpresse ('lying press'), has a darker history, associated with authoritarian regimes or populist bombast that diminishes the reputation of 'elite media' and the value of inconvenient truths. In a series of empirical studies, using digital methods and data journalism, the authors inquire into the extent to which social media have enabled the penetration of foreign disinformation operations, the widespread circulation of dubious content, and extreme commentators with considerable followings who attack mainstream media as fake.
Social media platforms such as Facebook, YouTube, and Twitter have millions of users logging in every day, using these platforms for communication, entertainment, and news consumption. These platforms adopt rules that determine how users communicate and thereby limit and shape public discourse [2]. Platforms need to deal with large amounts of data generated every day: as of October 2021, there were 4.55 billion social media users, and the average internet user was active on 6.7 platforms each month [3]. As a result, platforms were compelled to develop governance models and content moderation systems to deal with harmful and undesirable content, including disinformation. In this study:
• 'Content governance' is defined as the set of processes, procedures, and systems that determine how a given platform plans, publishes, moderates, and curates content.
• 'Content moderation' is the organised practice of a social media platform of pre-screening, removing, or labelling undesirable content to reduce the damage that inappropriate content can cause.
In Intellectual Output 1 of the SMILES project, researchers from Belgium (Flanders), the Netherlands and Spain conducted desk research to describe the current developments in each country around disinformation, particularly those related to the Covid-19 pandemic. In part 2 of the research, they identified training initiatives, courses and media literacy tools in each country that specifically focus on combating, or building resistance to, existing disinformation. Each identified activity or tool was described using a fixed set of characteristics (appendix 1). In the second stage of this research, experts from each country were interviewed. Among other things, they were asked for recommendations and tips for the interventions to be developed in Intellectual Output 2 of the SMILES project. All research results were reported in separate country reports. This joint report presents the highlights of those country reports and ends with recommendations for the interventions to be developed in Intellectual Output 2.