In an event-related potential (ERP) experiment using written language materials only, we investigated a potential modulation of the N400 by the modality switch effect. The modality switch effect occurs when a first sentence, describing a fact grounded in one modality, is followed by a second sentence describing a fact grounded in a different modality. For example, "A cellar is dark" (visual) was preceded either by another visual property ("Ham is pink") or by a tactile property ("A mitten is soft"). We also investigated whether the modality switch effect occurs for false sentences ("A cellar is light"). We found that, for true sentences, the ERP at the critical word "dark" showed a significantly greater frontal, early N400-like effect (270-370 ms) when there was a modality mismatch than when there was a modality match. This pattern was not found for the critical word "light" in false sentences. Results similar to the frontal negativity were obtained in a late time window (500-700 ms). The obtained ERP effect is similar to one previously obtained for pictures. We conclude that in this paradigm we obtained fast access to conceptual properties for modality-matched pairs, which leads to embodiment effects similar to those previously obtained with pictorial stimuli.
Once a decision architecture has been formulated, the question that typically follows is how each individually specified decision should be worked out. Should we fully specify each of the underlying business rules for a decision? Should a predictive analytics engine be built, or is it better to have a human make the decision?
What you don’t know can’t hurt you: this seems to be the current approach of public regulators across the world to responding to disinformation. Nobody is able to say with any degree of certainty what is actually going on. This is in no small part because, at present, public regulators have little insight into how disinformation actually works in practice. We believe that there are very good reasons for the current state of affairs, which stem from a lack of verifiable data available to public institutions. If an election board or a media regulator wants to know what types of digital content are being shared in its jurisdiction, it has no effective mechanism for finding this data or ensuring its veracity. While there are many other reasons why governments would want access to this kind of data, disinformation provides a particularly salient example of how the lack of such access undermines free and fair elections and informed democratic participation. This chapter provides an overview of the main problems associated with basing public regulatory decisions on unverified data, before sketching out what a solution might look like. To do this, the chapter develops the concept of auditing intermediaries. After discussing which problems auditing intermediaries are designed to solve, it turns to some of the main challenges associated with access to data, the potential misuse of intermediaries, and the general lack of standards for the provision of data by large online platforms. In conclusion, the chapter argues that there is an urgent need for an auditing mechanism to ensure the accuracy of transparency data provided by large online platform providers about the content on their services. Transparency data that have been audited would be considered verified data in this context.
Without such a transparency verification mechanism, existing public debate rests on little more than guesswork, and digital dominance is likely to become only more pronounced.