This research aims to find relevant evidence on whether there is a link between air capacity management (ACM) optimization and airline operations, also considering the airline business model perspective. The selected research strategy includes a case study based on Paris Charles de Gaulle Airport to measure the impact of ACM optimization variables on airline operations. For the analysis, we use historical data, which allows us to evaluate to what extent the new schedule obtained from the optimized scenario disrupts airlines' planned operations. The results of this study indicate that ACM optimization has a substantial impact on airline operations. Moreover, the airlines were categorized according to their business model, so that the results reveal which category was most affected. In detail, this study shows that, on the one hand, Full-Service Cost Carriers (FSCCs) were the most impacted: the presented ACM optimization variables had a severe effect on slot allocation (approximately 50% of slots lost), on fuel burn, accounted for as extra flight time in the airspace (approximately 12 min per aircraft), and on disrupted operations (approximately 31% to 39% of the preferred assigned runways were changed). On the other hand, the comparison shows that implementing an optimization model for managing airport capacity leads to a more balanced usage of runways and saves between 7% and 8% of taxi time (which reduces fuel emissions).
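The "more balanced usage of runways" reported above can be illustrated with a minimal sketch. The study's actual ACM optimization model is not described in this abstract, so the greedy load-balancing heuristic, the flight identifiers, and the runway names below are all invented for demonstration only.

```python
# Toy sketch of balanced runway assignment (not the study's model):
# each flight is assigned to the currently least-loaded runway.

def assign_runways(flights, runways):
    """Greedily assign each flight to the least-loaded runway so far."""
    load = {r: 0 for r in runways}
    assignment = {}
    for flight in flights:
        runway = min(load, key=load.get)  # pick the least-used runway
        assignment[flight] = runway
        load[runway] += 1
    return assignment, load

# Hypothetical flights and runway names, purely illustrative.
flights = [f"AF{n}" for n in range(12)]
assignment, load = assign_runways(flights, ["26R", "27L", "08L"])
print(load)  # each of the 3 runways ends up with 4 of the 12 flights
```

A real ACM model would of course weigh taxi times, slot constraints, and airspace flow rather than raw counts; this sketch only shows the balancing idea in its simplest form.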
Content moderation is commonly used by social media platforms to curb the spread of hateful content. Yet, little is known about how users perceive this practice and which factors may influence their perceptions. Publicly denouncing content moderation—for example, portraying it as a limitation to free speech or as a form of political targeting—may play an important role in this context. Evaluations of moderation may also depend on interpersonal mechanisms triggered by perceived user characteristics. In this study, we disentangle these different factors by examining how the gender, perceived similarity, and social influence of a user publicly complaining about a content-removal decision influence evaluations of moderation. In an experiment (n = 1,586) conducted in the United States, the Netherlands, and Portugal, participants witnessed the moderation of a hateful post, followed by a publicly posted complaint about moderation by the affected user. Evaluations of the fairness, legitimacy, and bias of the moderation decision were measured, as well as perceived similarity and social influence as mediators. The results indicate that arguments about freedom of speech significantly lower the perceived fairness of content moderation. Factors such as social influence of the moderated user impacted outcomes differently depending on the moderated user’s gender. We discuss implications of these findings for content-moderation practices.
Over the past decade, a growing number of artists and critical practitioners have become engaged with algorithms. This artistic engagement has resulted in algorithmic theatre, bot art, and algorithmic media and performance art of various kinds that thematise the dissemination and deployment of algorithms in everyday life. Especially striking is the high volume of artistic engagements with facial recognition algorithms, trading algorithms and search engine algorithms over the past few years. The fact that these three types of algorithms have garnered more responses than other types of algorithms suggests that they form a popular subject of artistic critique. This critique addresses several significant, supra-individual anxieties of our decade: socio-political uncertainty and polarisation, the global economic crisis and cycles of recession, and the centralisation and corporatisation of access to online information. However, the constituents of these anxieties, which seem to be central to our experience of algorithmic culture, are rarely interrogated. They, therefore, merit closer attention. This book uses prominent artistic representations of facial recognition algorithms, trading algorithms, and search algorithms as the entry point into an exploration of the constituents of the anxieties braided around these algorithms. It proposes that the work of Søren Kierkegaard, one of the first theorists of anxiety, helps us to investigate and critically analyse the constituents of 'algorithmic anxiety'.
Moderating reader comments under news articles is highly labour-intensive. With the help of artificial intelligence, moderation becomes possible at a reasonable cost. Since every application of artificial intelligence must be fair and transparent, it is important to investigate how media can meet these requirements.

Goal
This PhD project will focus on the fairness, accountability, and transparency of algorithmic systems for moderating reader comments. It offers a theoretical framework and usable measures that will support news organisations in complying with recent policy-making for a value-driven implementation of AI. As more and more news media begin to use AI, they must incorporate fairness, accountability, and transparency in their use of algorithms into their working practices.

Results
Although moderation with AI is very attractive from an economic point of view, news media need to know how to reduce inaccuracy and bias (fairness), disclose how their AI works (accountability), and enable users to understand how decisions are made with AI (transparency). This dissertation advances knowledge on these topics.

Duration
01 February 2022 - 01 February 2025

Approach
The central research question of this PhD project is: how can and should news media safeguard fairness, accountability, and transparency in their use of algorithms for comment moderation? To answer this question, the research is split into four sub-questions. How do news media use algorithms to moderate comments? What can news media do to reduce inaccuracy and bias when moderating comments with AI? What should news media disclose about their use of AI moderation? What makes explanations of AI moderation understandable to users with different levels of digital competence?
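One common way to make the fairness sub-question concrete is to compare error rates of a moderation system across user groups, for example the rate at which non-hateful comments are wrongly removed. The sketch below is illustrative only; the groups, labels, and records are invented and do not come from the project described above.

```python
# Toy fairness probe for a comment-moderation system (illustrative only):
# compare false-positive rates (benign comments wrongly removed) per group.

def false_positive_rate(records):
    """FPR = wrongly removed benign comments / all benign comments."""
    benign = [r for r in records if not r["hateful"]]
    if not benign:
        return 0.0
    wrongly_removed = sum(1 for r in benign if r["removed"])
    return wrongly_removed / len(benign)

def fpr_by_group(records):
    """Split records by group and compute the FPR for each group."""
    groups = {}
    for r in records:
        groups.setdefault(r["group"], []).append(r)
    return {g: false_positive_rate(rs) for g, rs in groups.items()}

# Invented moderation log: each record is one comment decision.
records = [
    {"group": "A", "hateful": False, "removed": True},
    {"group": "A", "hateful": False, "removed": False},
    {"group": "B", "hateful": False, "removed": False},
    {"group": "B", "hateful": False, "removed": False},
    {"group": "B", "hateful": True,  "removed": True},
]
print(fpr_by_group(records))  # {'A': 0.5, 'B': 0.0}
```

A large gap between groups, as in this toy log, would signal the kind of bias the fairness sub-question asks news media to reduce.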