Key to reinforcement learning in multi-agent systems is the ability to exploit the fact that agents directly influence only a small subset of the other agents. Such loose couplings are often modelled using a graphical model: a coordination graph. Finding an (approximately) optimal joint action for a given coordination graph is therefore a central subroutine in cooperative multi-agent reinforcement learning (MARL). Much research in MARL focuses on how to gradually update the parameters of the coordination graph, whilst leaving the solving of the coordination graph to a known, typically exact and generic, subroutine. However, exact methods (e.g., Variable Elimination) do not scale well, and generic methods do not exploit the MARL setting of gradually updating a coordination graph and recomputing the joint action to select. In this paper, we examine what happens if we use a heuristic method, i.e., local search, to select joint actions in MARL, and whether we can use the outcome of this local search from a previous time-step to speed up and improve local search. We show empirically that by using local search, we can scale up to many agents and complex coordination graphs, and that by reusing joint actions from the previous time-step to initialise local search, we can improve both the quality of the joint actions found and the speed with which they are found.
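The abstract does not spell out the local-search subroutine, so the following is a minimal sketch of hill-climbing joint-action selection on a pairwise coordination graph, warm-started with the previous time-step's joint action. The function name, the pairwise-payoff representation, and the stopping rule are illustrative assumptions, not the paper's exact algorithm.

```python
import random

def local_search(edges, payoffs, n_agents, n_actions, init=None, max_sweeps=50):
    """Hill-climbing joint-action selection on a pairwise coordination graph.

    edges:   list of (i, j) agent pairs that share a payoff function
    payoffs: dict mapping (i, j) -> function f(a_i, a_j) -> float
    init:    optional warm-start joint action, e.g. the previous time-step's choice
    """
    joint = list(init) if init is not None else [random.randrange(n_actions)
                                                 for _ in range(n_agents)]
    # Precompute each agent's incident edges so that best responses stay local.
    incident = {i: [] for i in range(n_agents)}
    for (i, j) in edges:
        incident[i].append((i, j))
        incident[j].append((i, j))

    def local_value(agent, action):
        # Value of the edges touching `agent`, with `agent` playing `action`.
        total = 0.0
        for (i, j) in incident[agent]:
            a_i = action if i == agent else joint[i]
            a_j = action if j == agent else joint[j]
            total += payoffs[(i, j)](a_i, a_j)
        return total

    for _ in range(max_sweeps):
        improved = False
        for agent in range(n_agents):
            best = max(range(n_actions), key=lambda a: local_value(agent, a))
            if local_value(agent, best) > local_value(agent, joint[agent]) + 1e-12:
                joint[agent] = best
                improved = True
        if not improved:  # no agent can improve its local payoff: local optimum
            break
    return joint
```

Under the assumption that the coordination graph changes only slightly between time-steps, warm-starting from the previous joint action would typically need fewer sweeps than a random restart, which is consistent with the speed-up the abstract reports.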
Just-in-time adaptive intervention (JITAI) has gained attention recently, and previous studies have indicated that it is an effective strategy in the field of mobile healthcare intervention. Identifying the right moment for the intervention is a crucial component. In this paper, a reinforcement learning (RL) technique is used in a smartphone exercise application to promote physical activity. The RL model adaptively determines the ‘right’ time to deliver a restricted number of notifications, based on users’ current contextual information (i.e., time and calendar). A four-week trial study was conducted to examine the feasibility of our model with real target users. JITAI reminders were sent by the RL model in the fourth week of the intervention, while the participants could only access the app’s other functionalities during the first three weeks. Eleven target users registered for this study, and the data from the 7 participants who used the application for 4 weeks and received the intervention reminders were analyzed. Not only were the users’ reaction behaviors after receiving the reminders analyzed from the application data, but the user experience with the reminders was also explored through a questionnaire and exit interviews. The results show that 83.3% of reminders sent at adaptive moments elicited a user reaction within 50 min, and 66.7% of physical activities in the intervention week were performed within 5 h of the delivery of a reminder. Our findings indicate the usability of the RL model, while the timing of reminder delivery can be further improved based on the lessons learned.
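The abstract does not specify how the RL model is implemented. One plausible framing of adaptive notification timing is an epsilon-greedy learner over discretised (time, calendar) contexts, sketched below; the class name, context encoding, and reward definition are assumptions for illustration only, not the paper's actual model.

```python
import random
from collections import defaultdict

class NotificationTimer:
    """Illustrative epsilon-greedy learner for 'send now' vs 'wait' decisions."""

    def __init__(self, epsilon=0.1, alpha=0.2):
        self.q = defaultdict(float)   # (context, action) -> value estimate
        self.epsilon = epsilon        # exploration rate
        self.alpha = alpha            # learning rate

    def choose(self, context):
        # Explore occasionally; otherwise pick the action with the higher estimate.
        if random.random() < self.epsilon:
            return random.choice(["send", "wait"])
        return max(["send", "wait"], key=lambda a: self.q[(context, a)])

    def update(self, context, action, reward):
        # Reward could be 1 if the user reacted to the reminder, 0 otherwise.
        key = (context, action)
        self.q[key] += self.alpha * (reward - self.q[key])

# Example: a morning time slot with a free calendar.
timer = NotificationTimer()
ctx = ("09:00-12:00", "calendar_free")
action = timer.choose(ctx)
timer.update(ctx, action, reward=1.0)   # user reacted within the observation window
```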
Artificially intelligent agents increasingly collaborate with humans in human-agent teams. Timely, proactive sharing of relevant information within the team contributes to the overall team performance. This paper presents a machine learning approach to proactive communication in AI agents based on contextual factors. Proactive communication was learned in two consecutive experimental steps: (a) multi-agent team simulations to learn effective communicative behaviors, and (b) human-agent team experiments to refine communication suitable for a human team member. The results consist of proactive communication policies for communicating both beliefs and goals within human-agent teams. Agents learned to use minimal communication to improve team performance in simulation, while they learned more specific socially desirable behaviors in the human-agent team experiment.