Once a decision architecture has been formulated, the question that often follows is how each individually specified decision should be worked out. Do we fully specify each of the underlying business rules for a decision? Should a predictive analytics engine be built, or is it better to have a human make the decision?
What you don’t know can’t hurt you: this seems to be the current approach for responding to disinformation by public regulators across the world. Nobody is able to say with any degree of certainty what is actually going on. This is in no small part because, at present, public regulators don’t have the slightest idea how disinformation actually works in practice. We believe that there are very good reasons for the current state of affairs, which stem from a lack of verifiable data available to public institutions. If an election board or a media regulator wants to know what types of digital content are being shared in their jurisdiction, they have no effective mechanisms for finding this data or ensuring its veracity. While there are many other reasons why governments would want access to this kind of data, the phenomenon of disinformation provides a particularly salient example of the consequences of a lack of access to this data for ensuring free and fair elections and informed democratic participation. This chapter will provide an overview of the main aspects of the problems associated with basing public regulatory decisions on unverified data, before sketching out some ideas of what a solution might look like. In order to do this, the chapter develops the concept of auditing intermediaries. After discussing which problems the concept of auditing intermediaries is designed to solve, it then discusses some of the main challenges associated with access to data, potential misuse of intermediaries, and the general lack of standards for the provision of data by large online platforms. In conclusion, the chapter suggests that there is an urgent need for an auditing mechanism to ensure the accuracy of transparency data provided by large online platform providers about the content on their services. Transparency data that have been audited would be considered verified data in this context. 
Without such a transparency verification mechanism, public debate rests on little more than conjecture, and digital dominance is likely only to become more pronounced.
The number of children who are victims of child abuse and domestic violence is high and has remained constant for years. With the advent of modern digital technologies, it is being cautiously explored whether they offer possible avenues for addressing this problem. Although technologies such as big data and machine learning have potential for analysing large volumes of data, and thus for possibly detecting child abuse (earlier), there are programmatic and ethical considerations that must be taken into account. If possible applications are to be explored further, it is also important that professionals within the social domain have knowledge of how the various forms of digital technology work, and that there is intensive collaboration with the different domains in which the technology is being designed.