Social media platforms such as Facebook, YouTube, and Twitter have millions of users logging in every day, using these platforms for communication, entertainment, and news consumption. These platforms adopt rules that determine how users communicate and thereby limit and shape public discourse.2 Platforms need to deal with the large amounts of data generated every day: as of October 2021, there were 4.55 billion social media users, each active on an average of 6.7 platforms per month.3 As a result, platforms were compelled to develop governance models and content moderation systems to deal with harmful and undesirable content, including disinformation. In this study:
• ‘Content governance’ is defined as a set of processes, procedures, and systems that determine how a given platform plans, publishes, moderates, and curates content.
• ‘Content moderation’ is the organised practice of a social media platform of pre-screening, removing, or labelling undesirable content to reduce the damage that inappropriate content can cause.
Content moderation is commonly used by social media platforms to curb the spread of hateful content. Yet, little is known about how users perceive this practice and which factors may influence their perceptions. Publicly denouncing content moderation—for example, portraying it as a limitation to free speech or as a form of political targeting—may play an important role in this context. Evaluations of moderation may also depend on interpersonal mechanisms triggered by perceived user characteristics. In this study, we disentangle these different factors by examining how the gender, perceived similarity, and social influence of a user publicly complaining about a content-removal decision influence evaluations of moderation. In an experiment (n = 1,586) conducted in the United States, the Netherlands, and Portugal, participants witnessed the moderation of a hateful post, followed by a publicly posted complaint about moderation by the affected user. Evaluations of the fairness, legitimacy, and bias of the moderation decision were measured, as well as perceived similarity and social influence as mediators. The results indicate that arguments about freedom of speech significantly lower the perceived fairness of content moderation. Factors such as social influence of the moderated user impacted outcomes differently depending on the moderated user’s gender. We discuss implications of these findings for content-moderation practices.
Journalists in the 21st century are expected to work for different platforms, gather online information, become multi‐media professionals, and learn how to deal with amateur contributions. The business model of gathering, producing, and distributing news has changed rapidly. Producing content is not enough; moderation and curation are at least as important when it comes to working for digital platforms. There is growing pressure on news organizations to produce more inexpensive content for digital platforms, resulting in new models of low‐cost or even free content production. Aggregation, either by humans or machines ‘finding’ news and re‐publishing it, is gaining importance. At so‐called ‘content farms’, freelancers, part‐timers, and amateurs produce articles that are expected to rank high in web searches. Apart from this low‐pay model, a no‐pay model has emerged where bloggers write for no compensation at all. At the Huffington Post, thousands of bloggers actually work for free. Other websites use similar models, sometimes offering writers a fixed price depending on the number of clicks a page gets. We analyse the background, the consequences for journalists and journalism, and the implications for online news organizations. We investigate aggregation services, content farms, and no‐pay or low‐pay news websites that mainly use bloggers for input.