There is little research on team collaboration in digital entertainment games, and little evidence for the efficacy of game-based team training or the validity of game-based team assessment. This is a shortcoming, given the increasing pervasiveness of serious games in organizational life, e.g. for operational training, management and leadership. Is it possible to establish marked relationships between psychometric constructs that measure ‘team composition and performance’ and ‘analytics’ that unobtrusively measure gameplay performance? If so, what are the implications for game-based team research and assessment? The authors conducted explorative, quasi-experimental field experiments with the multiplayer serious game TeamUp. One field experiment was conducted with 150 police officers as part of a task-specific two-day team training. Research data were gathered through pre-game and post-game questionnaires on team constructs such as ‘psychological safety’ and ‘team cohesion’. A large quantity of in-game data was logged to construct indicators such as ‘time needed to complete the task’, ‘speak time’ and ‘avoidable mistakes’ as measures of team performance. The analysis shows that ‘team cohesion’ and ‘psychological safety’ correlate moderately and significantly with in-game performance indicators. Teams with unequal individual game performance speak the most, while teams with equally low or equally high individual performance spend significantly less time speaking. These indicative findings support the need to further develop validated analytics and game-based environments for team research and assessment.
Background. Despite the increasing pervasiveness of digital entertainment and serious games in organisational life, there is little evidence for the validity of game-based team training and assessment. Aim. The authors used the game TEAMUP for a series of team training and assessment sessions, while at the same time researching the internal validity of the game for this purpose. Method. A total of 106 sets of data on games played by teams of professionals (police officers, auditors, consultants, etc.) and undergraduates and postgraduates (in aerospace engineering, entrepreneurship, etc.) were gathered for analysis through pre- and post-game questionnaires focusing on constructs for team quality, such as psychological safety and team cohesiveness. In addition, a large quantity of in-game data, such as time to complete the task, distance and avoidable mistakes, was logged to measure in-game team performance. Correlation and regression analyses were conducted to find relationships between team structure factors, team quality constructs and in-game performance measures. Results. The main finding is that the in-game performance measure ‘avoidable mistakes’ (a proxy for task quality) correlates markedly and pervasively with ‘team cohesiveness’. More importantly, the findings support the premise that in-game assessment can be internally valid for team research and assessment purposes.
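The correlation step described in the Method can be sketched as follows. This is a minimal illustration, not the authors' actual analysis pipeline: the data are synthetic, the effect direction (more cohesive teams making fewer avoidable mistakes) is assumed for illustration, and the choice of a Pearson correlation via SciPy is one plausible implementation.

```python
import numpy as np
from scipy import stats

# Hypothetical data for illustration only: questionnaire-based team
# cohesiveness scores (Likert-scale means) and logged counts of
# avoidable mistakes for 30 teams.
rng = np.random.default_rng(42)
cohesiveness = rng.uniform(1.0, 5.0, size=30)

# Simulate an assumed negative relationship: more cohesive teams
# make fewer avoidable mistakes, plus noise.
avoidable_mistakes = 20.0 - 3.0 * cohesiveness + rng.normal(0.0, 2.0, size=30)

# Pearson correlation between the team-quality construct and the
# in-game performance measure, with its two-sided p-value.
r, p = stats.pearsonr(cohesiveness, avoidable_mistakes)
print(f"r = {r:.2f}, p = {p:.4f}")
```

In a real analysis of this kind, each questionnaire construct would be paired with each logged in-game indicator in the same way, with regression models then used to probe the relationships that the correlations flag.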