Artificially intelligent agents increasingly collaborate with humans in human-agent teams. Timely, proactive sharing of relevant information within the team contributes to overall team performance. This paper presents a machine learning approach in which AI agents learn proactive communication from contextual factors. Proactive communication was learned in two consecutive experimental steps: (a) multi-agent team simulations to learn effective communicative behaviors, and (b) human-agent team experiments to refine communication suitable for a human team member. The results are proactive communication policies for sharing both beliefs and goals within human-agent teams. Agents learned to use minimal communication to improve team performance in simulation, while they learned more specific, socially desirable behaviors in the human-agent team experiment.
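The abstract does not include code, but a rough sketch may help illustrate what a learned proactive-communication policy could look like. The snippet below uses tabular Q-learning over discretized contextual factors to decide whether an agent shares a belief, shares a goal, or stays silent; all names, actions, and parameters (CommPolicy, the example context features, the reward) are hypothetical illustrations, not taken from the paper.

```python
import random
from collections import defaultdict

# Hypothetical sketch: tabular Q-learning for a proactive-communication
# decision conditioned on contextual factors. Names and parameters are
# illustrative, not from the paper.

ACTIONS = ("share_belief", "share_goal", "stay_silent")

class CommPolicy:
    def __init__(self, epsilon=0.1, alpha=0.2, gamma=0.95):
        self.q = defaultdict(float)  # (context, action) -> estimated value
        self.epsilon = epsilon       # exploration rate
        self.alpha = alpha           # learning rate
        self.gamma = gamma           # discount factor

    def act(self, context):
        """Epsilon-greedy choice among communication actions."""
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(context, a)])

    def update(self, context, action, reward, next_context):
        """One-step Q-learning update driven by a team-performance reward."""
        best_next = max(self.q[(next_context, a)] for a in ACTIONS)
        td_target = reward + self.gamma * best_next
        self.q[(context, action)] += self.alpha * (td_target - self.q[(context, action)])

# Usage: contexts are discretized tuples of contextual factors; rewards
# would come from team-level performance in simulation, and could later be
# refined with feedback from human team members.
policy = CommPolicy()
ctx = ("teammate_busy", "novel_info")
action = policy.act(ctx)
policy.update(ctx, action, reward=1.0, next_context=ctx)
```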
As artificial intelligence (AI) systems with increasing adaptation and learning capabilities enter our working and leisure environments, new opportunities arise for developing hybrid (human-AI) intelligence (HI) systems that comprise new ways of collaboration. However, there is not yet a structured way of specifying collaboration design solutions for HI systems, and best practices are not shared across application domains. We address this gap by investigating how specific design solutions can be generalized into design patterns that can be shared and applied in different contexts. We present a human-centered, bottom-up approach for the specification of design solutions and their abstraction into team design patterns. We apply the proposed approach to four concrete HI use cases and show that it successfully extracts generalizable team design patterns, providing re-usable design components across various domains. This work advances previous research on team design patterns and on designing applications of HI systems.
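To make the idea of a re-usable design component concrete, the sketch below encodes a team design pattern as a small data structure. The fields (problem, context, human and agent tasks, interaction) are a plausible assumption about what such a pattern records, not the paper's actual schema, and the example pattern is invented for illustration.

```python
from dataclasses import dataclass, field

# Hypothetical encoding of a team design pattern as a re-usable component.
# The fields and the example below are illustrative, not from the paper.

@dataclass
class TeamDesignPattern:
    name: str
    problem: str                  # collaboration problem the pattern addresses
    context: str                  # conditions under which the pattern applies
    human_tasks: list = field(default_factory=list)
    agent_tasks: list = field(default_factory=list)
    interaction: str = ""         # how human and agent coordinate

handover = TeamDesignPattern(
    name="Critical-decision handover",
    problem="Agent lacks authority or certainty for a high-stakes choice",
    context="Time-constrained tasks with asymmetric expertise",
    human_tasks=["review agent proposal", "take final decision"],
    agent_tasks=["detect low confidence", "prepare decision summary"],
    interaction="Agent escalates with rationale; human confirms or overrides",
)
print(handover.name)
```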
Now that collaborative robots are becoming more widespread in industry, the question arises how we can make them better co-workers and team members. Team members cooperate and collaborate to attain common goals. Consequently, they provide and receive information, often non-linguistic, that is necessary to accomplish the work at hand and to coordinate their activities. The cooperative behaviour needed to function as a team also requires that team members develop a certain level of trust in each other. In this paper we argue that for cobots to become trusted, successful co-workers in an industrial setting, we need to develop design principles for cobot behaviour that provides legible, that is, understandable, information and generates trust. Furthermore, we believe that modelling such non-verbal cobot behaviour on that of animal co-workers may offer useful opportunities, even though additional communication may be needed for optimal collaboration.

Marijke Bergman, Elsbeth de Joode, +1 author, Janienke Sturm. Published in CHIRA 2019.