It is crucial that ASR systems can handle the wide range of variation in speech across speakers from different demographic groups, with different speaking styles, and with different (dis)abilities. A potential quality-of-service harm arises when ASR systems do not perform equally well for everyone. ASR systems may exhibit bias against certain speaker groups, for example against non-native accents, particular age groups, or one gender. In this study, we evaluate two widely used neural network-based architectures, Wav2vec2 and Whisper, for potential biases against groups of Dutch speakers. As a test set we used the Dutch speech corpus JASMIN, which contains read and conversational speech in a human-machine interaction setting. The results reveal a significant bias against non-native speakers, children, the elderly, and speakers of some regional dialects. The ASR systems generally perform slightly better for women than for men.
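The kind of group-wise evaluation described above is typically done by computing the word error rate (WER) separately per demographic group. A minimal sketch (not the paper's actual code; the grouping labels and sample data are hypothetical) might look like this:

```python
# Illustrative sketch: per-group word error rate (WER) to surface
# performance gaps between speaker groups. Group labels and data
# are hypothetical examples, not from the JASMIN corpus itself.
from collections import defaultdict


def word_edits(reference: str, hypothesis: str) -> int:
    """Word-level Levenshtein distance (insertions + deletions + substitutions)."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits needed to turn the first i ref words into the first j hyp words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + sub)  # substitution or match
    return dp[len(ref)][len(hyp)]


def wer(reference: str, hypothesis: str) -> float:
    """WER = word-level edit distance divided by reference length."""
    return word_edits(reference, hypothesis) / max(len(reference.split()), 1)


def wer_by_group(samples):
    """samples: iterable of (group, reference, hypothesis) triples.
    Returns a corpus-level WER per demographic group."""
    edits, words = defaultdict(int), defaultdict(int)
    for group, ref, hyp in samples:
        edits[group] += word_edits(ref, hyp)
        words[group] += len(ref.split())
    return {g: edits[g] / words[g] for g in edits}
```

Aggregating edit counts before dividing (rather than averaging per-utterance WERs) gives the standard corpus-level WER, so short and long utterances are weighted by their actual word counts.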
Despite the widespread application of recommender systems (RecSys) in our daily lives, rather limited research has been done on quantifying the unfairness and biases present in such systems. Prior work largely focuses on determining whether a RecSys is discriminatory or not, but does not compute the amount of bias present in these systems. Biased recommendations may lead to decisions with adverse effects on individuals, sensitive user groups, and society; it is therefore important to quantify these biases for fair and safe commercial applications of these systems. This paper focuses on quantifying popularity bias, which stems directly from the output of RecSys models and leads to the over-recommendation of popular items that are likely to be misaligned with user preferences. We propose four metrics to quantify popularity bias in RecSys over time, in a dynamic setting, across different sensitive user groups. These metrics are demonstrated for four collaborative filtering-based RecSys algorithms trained on two benchmark datasets commonly used in the literature. The results show that, when used conjointly, the proposed metrics provide a comprehensive understanding of growing disparities in treatment between sensitive groups over time.
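The abstract's four metrics are not spelled out here, but one common way to gauge popularity bias per sensitive group is "popularity lift": the average popularity of a user's recommended items minus the average popularity of the items in that user's own history, aggregated by group. A hedged sketch (the function names and data layout are illustrative assumptions, not the paper's definitions):

```python
# Illustrative popularity-lift sketch, NOT the paper's four metrics.
# A positive lift means recommendations skew more popular than the
# user's own interaction history.
from collections import Counter, defaultdict


def item_popularity(interactions):
    """interactions: list of (user, item) pairs.
    Popularity of an item = its share of all interactions."""
    counts = Counter(item for _, item in interactions)
    total = sum(counts.values())
    return {item: c / total for item, c in counts.items()}


def popularity_lift_by_group(interactions, recommendations, user_group):
    """
    recommendations: dict user -> list of recommended items
    user_group:      dict user -> sensitive group label
    Returns per-group mean lift: avg popularity of recommended items
    minus avg popularity of the user's interacted items.
    """
    pop = item_popularity(interactions)
    history = defaultdict(list)
    for user, item in interactions:
        history[user].append(item)
    lifts = defaultdict(list)
    for user, recs in recommendations.items():
        if not recs or not history[user]:
            continue
        rec_pop = sum(pop.get(i, 0.0) for i in recs) / len(recs)
        hist_pop = sum(pop[i] for i in history[user]) / len(history[user])
        lifts[user_group[user]].append(rec_pop - hist_pop)
    return {g: sum(v) / len(v) for g, v in lifts.items()}
```

Tracking such a per-group statistic at successive retraining steps is one way to observe the "growing disparities over time" the abstract describes.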
This paper presents a preliminary study that reviews and explores bias strategies through the framework of a different discipline: change management. The hypothesis is: if the major problem with implicit bias strategies is that they do not translate into actual changes in behavior, then it could be helpful to learn from studies that have contributed to successful change interventions, such as reward management, social neuroscience, health behavior change, and cognitive behavioral therapy. The result of this integrated approach is threefold: (1) current bias strategies can be improved, and new ones developed, with insights from adjacent fields of study in change management; (2) it could be more sustainable to invest in a holistic and proactive bias strategy that targets the social environment, eliminating the very conditions under which biases arise; and (3) while implicit biases are automatic, future studies should invest more in strategies that empower people as "change agents" who can act proactively to regulate the very environment that gives rise to their biased thoughts and behaviors.