Many attempts have been made to build an artificial brain. This paper aims to contribute to the conceptualization of an artificial learning system that functionally resembles an organic brain in a number of important neuropsychological aspects. The techniques (algorithms) required are probably already available in various fields of artificial intelligence; the question, however, is how to combine them. Combining truly autonomous learning, in which "accidental" findings (serendipity) can be used without supervision, with supervised learning from both the surroundings and previous knowledge remains very challenging. When circumstances change, network models that cannot utilize previously acquired knowledge must be completely reset, while in representation-driven networks, as we will argue, the formation of new representations remains out of reach. This paper discusses considerations for making artificial learning functionally similar to organic learning, and the type of algorithm that is necessary in the different hierarchical layers of the brain. To this end, algorithms are divided into two types: conditional algorithms (CAs) and completely unsupervised learning. It is argued that in a conceptualisation of an artificial device that is functionally similar to an organic learning system, both conditional learning (by applying CAs) and non-conditional (supervised) learning must be applied.
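To make the combination concrete, the sketch below (purely illustrative, not the architecture argued for in the paper) pairs a supervised classifier trained on previous knowledge with an unsupervised novelty detector: familiar inputs are classified, while novel inputs are flagged as candidates for unsupervised, "serendipitous" learning rather than forcing a full reset. All data and model choices are placeholders.

```python
# Illustrative sketch only: supervised learning from previous knowledge
# combined with unsupervised detection of novel ("serendipitous") inputs.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, IsolationForest

rng = np.random.default_rng(0)
X_known = rng.normal(0.0, 1.0, size=(200, 8))   # samples from familiar situations
y_known = (X_known[:, 0] > 0).astype(int)       # supervised labels ("previous knowledge")

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_known, y_known)
novelty = IsolationForest(random_state=0).fit(X_known)  # unsupervised model of the familiar

X_new = rng.normal(2.5, 1.0, size=(5, 8))       # changed circumstances
for x in X_new:
    if novelty.predict(x.reshape(1, -1))[0] == -1:  # -1 marks an outlier
        print("novel input: candidate for unsupervised (serendipitous) learning")
    else:
        print("familiar input: classified as", clf.predict(x.reshape(1, -1))[0])
```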
This paper introduces and contextualises Climate Futures, an experiment in which AI was repurposed as a ‘co-author’ of climate stories and a co-designer of climate-related images that facilitate reflection on the present and future(s) of living with climate change. It converses with histories of writing and computation, including surrealist ‘algorithmic writing’, recombinatory poems and ‘electronic literature’. At its core lies a reflection on how machine learning’s associative, predictive and regenerative capacities can be employed for playful, critical and contemplative ends. Our goal is not to automate writing (as in product-oriented applications of AI). Instead, as poet Charles Hartman argues, ‘the question isn’t exactly whether a poet or a computer writes the poem, but what kinds of collaboration might be interesting’ (1996, p. 5). STS scholars critique labs as future-making sites and machine learning modelling practices, describing them, for example, as fictions. Building on these critiques, and in line with ‘critical technical practice’ (Agre, 1997), we embed our critique of ‘making the future’ in how we employ machine learning to design a tool for looking ahead and telling stories about life with climate change. This has involved engaging with climate narratives and machine learning from the critical and practical perspectives of artistic research. We trained machine learning algorithms (i.e. GPT-2 and AttnGAN) on climate fiction novels (as a dataset of cultural imaginaries of the future). We prompted them to produce new climate fiction stories and images, which we edited to create a tarot-like deck and a story-book, thus also playfully engaging with machine learning’s predictive associations. The tarot deck is designed to facilitate conversations about climate change. How to imagine the future beyond scenarios of resilience and the dystopian? How to aid our transition into different ways of caring for the planet and each other?
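As an indication of how the text side of such an experiment can be set up (the paper does not specify the authors’ training configuration), the sketch below fine-tunes an off-the-shelf GPT-2 with the Hugging Face transformers library on a plain-text climate fiction corpus and samples a new story fragment. The file name clifi.txt and all hyperparameters are hypothetical; TextDataset is the legacy helper for block-wise language-model training data.

```python
# Minimal sketch, not the authors' pipeline: fine-tune GPT-2 on a plain-text
# climate fiction corpus ("clifi.txt" is a hypothetical file) and sample from it.
from transformers import (GPT2LMHeadModel, GPT2Tokenizer, TextDataset,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

dataset = TextDataset(tokenizer=tokenizer, file_path="clifi.txt", block_size=128)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-clifi", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=dataset,
    data_collator=collator,
)
trainer.train()

# Prompt the fine-tuned model for a new climate fiction fragment.
inputs = tokenizer("The coastline in 2080", return_tensors="pt")
outputs = model.generate(**inputs, max_length=80, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```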
The retail industry consists of establishments selling consumer goods (e.g. technology, pharmaceuticals, food and beverages, apparel and accessories, home improvement) and services (e.g. specialty and movies) to customers through multiple channels of distribution, including both traditional brick-and-mortar and online retailing. Managing the corporate reputation of retail companies is crucial, as it has many advantages; for instance, it has been shown to impact generated revenues (Wang et al., 2016). But in order to manage corporate reputation, one has to be able to measure it or, nowadays even better, listen to the relevant social signals that are out there on the public web. One of the most extensive and widely used frameworks for measuring corporate reputation is based on conducting elaborate surveys with the respective stakeholders (Fombrun et al., 2015). This approach is valuable but laborious and resource-heavy, and it cannot generate the automatic alerts and the quick, live insights that are sorely needed in the internet era. For these purposes, a social listening approach is needed that can be tailored to online data, with consumer reviews as the main data source. Online review datasets are a form of electronic word-of-mouth (WOM) that, when a data source relevant to retail is picked, commonly contain information about customers’ perceptions of products (Pookulangara, 2011) and are massively available. The algorithm that we have built into our application provides retailers with reputation scores for all variables deemed relevant to retail in the model of Fombrun et al. (2015). Examples of such variables for products and services are high quality, good value, stands behind, and meets customer needs. We propose a new set of subvariables with which these variables can be operationalized for retail in particular. Scores are calculated using proportions of positive opinion pairs, such as <fast, delivery> or <rude, staff>, that have been designed per variable. With these insights extracted, companies can act accordingly and proceed to improve their corporate reputation. It is important to emphasize that, once the design is complete and implemented, all processing can be performed fully automatically and unsupervised. The application makes use of a state-of-the-art aspect-based sentiment analysis (ABSA) framework because of ABSA’s ability to generate sentiment scores for all relevant variables and aspects. Since most online data is in open form and we deliberately want to avoid having data labelled by human experts, the unsupervised aspectator algorithm has been picked. It employs a lexicon to calculate sentiment scores and uses syntactic dependency paths to discover candidate aspects (Bancken et al., 2014). We have applied our approach to a large number of online review datasets, sampled from a list of the 50 top global retailers according to the National Retail Federation (2020), covering both offline and online operations, which we scraped from Trustpilot, a public review website that is well known to retailers. The algorithm has been carefully evaluated by having two independent annotators manually annotate a randomly sampled subset of the datasets for validation purposes. The kappa score on this subset was 0.80.
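A minimal sketch of the scoring idea, assuming opinion pairs have already been extracted and mapped to reputation variables (the lexicon, the mapping and the pairs below are invented for illustration; the actual aspectator pipeline derives pairs from syntactic dependency paths over real reviews):

```python
# Illustrative sketch, not the actual implementation: per reputation variable,
# the score is the proportion of positive <modifier, aspect> opinion pairs
# among all pairs mapped to that variable.
from collections import defaultdict

SENTIMENT_LEXICON = {"fast": +1, "friendly": +1, "rude": -1, "slow": -1}
ASPECT_TO_VARIABLE = {"delivery": "meets customer needs", "staff": "high quality"}

extracted_pairs = [("fast", "delivery"), ("rude", "staff"),
                   ("slow", "delivery"), ("friendly", "staff")]

counts = defaultdict(lambda: [0, 0])  # variable -> [positive pairs, total pairs]
for modifier, aspect in extracted_pairs:
    variable = ASPECT_TO_VARIABLE.get(aspect)
    polarity = SENTIMENT_LEXICON.get(modifier, 0)
    if variable is None or polarity == 0:
        continue  # aspect not mapped to a variable, or modifier not in lexicon
    counts[variable][1] += 1
    if polarity > 0:
        counts[variable][0] += 1

for variable, (positive, total) in counts.items():
    print(f"{variable}: reputation score = {positive / total:.2f}")
```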
In a hospital, any error can cost a life. Even a small preparation error when making up intravenous (IV) medication can lead to life-threatening conditions for the patient. Medication of this type is prepared in the pharmacy and on the nursing ward; the nursing ward in particular is a busy and unpredictable setting, and worldwide, serious preparation errors still occur too frequently there. To reduce these human errors, this KIEM proposal develops a proof-of-concept ‘smart eye’ that detects, just before administration, whether the correct dose is present, whether the medication type is correct, and whether contamination is absent. The smart eye uses hyperspectral technology and artificial intelligence, and is a collaboration between the Computer Vision & Data Science department of NHL Stenden Hogeschool, the automated medication-inspection specialist ZiuZ, and the Tjongerschans hospital. The unique combination of new AI techniques, hyperspectral technology and application to intravenous medication is technically new for this consortium, and has not previously been developed for use at the bedside or in the medication room on the nursing ward. The unpredictable setting and the urgency at the bedside make this research technically challenging; in addition, the final device must be small, portable and fast. To capture the wide variety of possible usage scenarios and human errors in the algorithm, a simulation procedure developed by NHLS will be followed: re-enacting the practical situation in collaboration with care providers, with deliberately introduced errors and computer-generated image manipulation. The project will be integrated into education following the design-based method, with teams consisting of domain experts, companies, lecturer-researchers and students. The ultimate goal is to use a proof-of-concept bedside demonstrator to win over a large consortium of hospitals, developers and end users for a larger follow-up project.
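As a rough indication of the kind of pipeline involved (the project’s actual method is not described beyond hyperspectral imaging plus AI), the sketch below classifies each pixel of a hyperspectral cube by its spectrum to separate medication types from contamination; all dimensions, labels and data are synthetic placeholders.

```python
# Illustrative sketch only: per-pixel classification of a hyperspectral cube
# (H x W x bands), using each pixel's spectrum as the feature vector.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

H, W, BANDS = 64, 64, 120                  # hypothetical cube dimensions
rng = np.random.default_rng(42)
cube = rng.random((H, W, BANDS))           # stand-in for a hyperspectral image
labels = rng.integers(0, 3, size=(H, W))   # 0 = drug A, 1 = drug B, 2 = contamination

X = cube.reshape(-1, BANDS)                # one spectrum per pixel
y = labels.reshape(-1)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# At the bedside, a new cube would be classified pixel by pixel; a dose/type
# check could then inspect the predicted label map.
new_cube = rng.random((H, W, BANDS))
pred_map = clf.predict(new_cube.reshape(-1, BANDS)).reshape(H, W)
print("predicted label counts:", np.bincount(pred_map.ravel()))
```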
Huntington’s disease (HD) and various spinocerebellar ataxias (SCA) are autosomal dominantly inherited neurodegenerative disorders caused by a CAG repeat expansion in the disease-related gene [1]. The impact of HD and SCA on families and individuals is enormous and far-reaching, as patients typically display first symptoms during midlife. HD is characterized by unwanted choreatic movements, behavioral and psychiatric disturbances, and dementia. SCAs are mainly characterized by ataxia, but also by other symptoms including cognitive deficits, similarly affecting quality of life and leading to disability. These problems worsen as the disease progresses, and affected individuals are no longer able to work, drive, or care for themselves. This places an enormous burden on family and caregivers; patients require intensive nursing-home care as the disease progresses, and lifespan is reduced. Although the clinical and pathological phenotypes are distinct for each CAG repeat expansion disorder, similar molecular mechanisms are thought to underlie the effect of expanded CAG repeats in different genes. The predicted age of onset (AO) for HD, SCA1 and SCA3 (and five other CAG-repeat diseases) is based on the polyQ expansion, but CAG/polyQ repeat length accounts for only about 50% of the variation in AO (see figure below). A large variation in AO is observed, especially for the most common range between 40 and 50 repeats [11,12]. These large differences in onset, especially in the 40-50 CAG range, not only imply that current individual predictions of AO are imprecise (affecting important life decisions that patients need to make and hampering the assessment of potentially onset-delaying interventions), but also offer optimism that (patient-related) factors exist that can delay the onset of disease. To address both items, we need a better model, based on patient-derived cells, that generates parameters which not only mirror the CAG-repeat-length dependency of these diseases but also better predict inter-patient variation in disease susceptibility and the effectiveness of interventions. To this end, we will use a staggered project design, as explained in 5.1, in which we first determine which cellular and molecular determinants (referred to as landscapes) in isogenic iPSC models are associated with increased CAG repeat lengths, using deep-learning algorithms (DLA) (WP1). For this, we will use a well-characterized control cell line in which we modify the CAG repeat length in the endogenous ataxin-1, ataxin-3 and huntingtin genes from wild-type repeat lengths to intermediate, adult-onset and juvenile polyQ repeat lengths. We will next expand the model with cells from existing and new cohorts of early-onset, adult-onset and late-onset/intermediate-repeat SCA1, SCA3 and HD patients, for whom, besides accurate AO information, clinical parameters (MRI scans, cerebrospinal fluid markers, etc.) will be (made) available. This will be used for validation and to fine-tune the molecular landscapes (again using DLA) towards the best prediction of individual patient-related clinical markers and AO (WP3).
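To illustrate why repeat-length-only predictions are imprecise, one widely cited parametric model for HD (Langbehn et al., 2004) predicts expected AO from CAG length alone; it is reproduced below purely as background and is not part of this project’s model.

```python
# Background illustration (Langbehn et al., 2004, HD only):
# E[AO] = 21.54 + exp(9.556 - 0.1460 * CAG).
# A point estimate like this carries wide uncertainty per individual,
# which is the imprecision the project aims to reduce.
import math

def expected_onset_hd(cag: int) -> float:
    """Expected age of onset (years) for a given HD CAG repeat length."""
    return 21.54 + math.exp(9.556 - 0.1460 * cag)

for cag in (40, 43, 46, 50):
    print(f"CAG={cag}: expected AO ~ {expected_onset_hd(cag):.1f} years")
```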
The same models and (most relevant) landscapes will also be used to evaluate novel mutant-protein-lowering strategies as they emerge from WP4. This overall development of landscape prediction is an iterative process that involves (a) data processing (WP5); (b) unsupervised data exploration and dimensionality reduction to find patterns in the data and create “labels” for similarity; and (c) development of supervised deep learning (DL) models for landscape prediction based on the labels from the previous step. Each iteration starts with data that is generated and deployed according to FAIR principles, and the developed deep-learning system will be instrumental in connecting these WPs. Insights into algorithm sensitivity from the predictive models will form the basis for discussion with field experts on the distinctions found and their phenotypic consequences. While the full development of accurate diagnostics might go beyond the timespan of the five-year project, ideally our final landscapes can be used for new genetic counselling: when somebody tests positive for the gene, can we use his or her cells, feed them into the generated cell-based model, and better predict the AO and severity? While this will answer questions from clinicians and patient communities, it will also generate new ones, which is why we will study the ethical implications of such improved diagnostics in advance (WP6).
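A minimal sketch of one iteration of steps (b) and (c), with random stand-ins for the per-cell molecular profiles (the real features, cluster counts and DL architecture are open design choices in the project):

```python
# Illustrative sketch of the label-then-predict loop described above.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 50))   # stand-in for per-cell molecular profiles

# (b) unsupervised exploration: reduce dimensionality, cluster to derive labels
Z = PCA(n_components=5, random_state=0).fit_transform(X)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(Z)

# (c) supervised model trained on the derived labels
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                      random_state=0).fit(X_tr, y_tr)
print("held-out accuracy on derived labels:", model.score(X_te, y_te))
```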