Introduction
Multiplayer online video games have been prevalent in the games industry since their emergence at the end of the twentieth century. Since then, player counts have grown exponentially, and so has the range of multiplayer experiences available. Along with this flood of available content, players of every stripe emerged, filling the available servers and building communities. In this way, many games grew beyond their original scope and scale, often leaving developers struggling to keep up and to retain control. Sadly, the toxicity already widespread on the internet quickly infiltrated multiplayer games and their communities. This sparked the ongoing fight of game developers (and players) against toxic actors, and plenty of research along with it.
Existing academic literature and game-specific approaches have predominantly attempted to combat this issue by targeting toxic behavior and its perpetrators. This usually entails a series of warnings against exhibiting what constitutes toxic behavior, a "report" functionality for players, and the eventual punishment of toxic individuals. This strategy, however, is very limited in scope. For instance, toxic behavior is difficult to define concretely [32], which leads to considerable inconsistencies in how different games interpret such actions within their own context [77]. Using terms like "disruptive behaviors" [43] instead of "toxicity" can help to some degree, but this adaptation is slow and inadequate on its own. Furthermore, every game implements its own unique reporting and punishment system, so players have to familiarize themselves with different designs, often with no assistance or guidance from the game itself. These facts, along with poor design decisions, allow perpetrators to escape punishment or to receive one disproportionate to the offense, whether harsher or more lenient. Worse still, the reporting system can itself become an instrument in the perpetrators' toolbox: because most such systems rely on simple automation, players can abuse them by repeatedly reporting an innocent player until the system punishes them. Finally, all existing systems follow a reactive design, meaning that someone must be targeted by and experience toxic behavior before the toxic actor is reported and (potentially) punished.
This master's thesis differentiates itself by approaching the problem from a different angle: rather than asking players not to be toxic (lest there be consequences), why not promote and reward prosocial behavior in the game? Multiplayer games should not connote toxicity; they should signify community and prosocial interaction.
The goal was to design and develop a modular system that can be used in most (if not all) multiplayer team player-versus-player (PvP) and player-versus-environment (PvE) games. The system is a prototype: most of its functionality is fully implemented, while some actions are simulated that a final version would automate completely. This detail, however, does not affect the outcome of this thesis.
The novel system is based on a points-ranking scheme, in which players accumulate points for their in-game prosocial behavior and are assigned ranks accordingly. Players would receive specific in-game rewards exclusive to this system (simulated as part of this study).
This design both appeals to players, as it satisfies the "Social" and "Achievement" motivations in the Gamer Motivation Model, and feels familiar to them, since it mimics a classic ranking system they are likely to be acquainted with. The system provides a detailed breakdown of all prosocial actions available in each game, along with statistics on the ones the player has performed. Together, these two features aim to help players self-reflect on their prosocial performance and discover new ways to be prosocial in their favorite games.
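To make the points-ranking scheme concrete, the mapping from recorded prosocial actions to a point total and rank could be sketched as below. The action names, per-action point values, and rank thresholds here are illustrative assumptions for exposition only, not the values used by the actual system.

```python
# Sketch of a points-ranking scheme for prosocial behavior.
# All names and numbers below are hypothetical placeholders.

PROSOCIAL_POINTS = {        # hypothetical per-action point values
    "revive_teammate": 10,
    "share_resources": 5,
    "positive_ping": 1,
}

RANK_THRESHOLDS = [         # (minimum total points, rank name), hypothetical
    (0, "Bronze"),
    (50, "Silver"),
    (150, "Gold"),
]

def total_points(actions):
    """Sum the point values of a list of recorded prosocial action names."""
    return sum(PROSOCIAL_POINTS.get(a, 0) for a in actions)

def rank_for(points):
    """Return the highest rank whose threshold the point total meets."""
    rank = RANK_THRESHOLDS[0][1]
    for threshold, name in RANK_THRESHOLDS:
        if points >= threshold:
            rank = name
    return rank

# Example: a session with four of each action type.
session = ["revive_teammate", "share_resources", "positive_ping"] * 4
pts = total_points(session)      # 4 * (10 + 5 + 1) = 64
print(pts, rank_for(pts))        # prints: 64 Silver
```

The same breakdown that drives the total (which actions were performed, and how often) is what the system surfaces back to players as statistics for self-reflection.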
To evaluate the effectiveness of this system, a user study with 16 participants gathered through convenience sampling was conducted. Each participant played one of three games chosen based on a number of factors: Helldivers II, Overwatch 2, and Rainbow Six: Siege. The gameplay session was recorded, and data on the prosocial actions performed was extracted from it. In a second session, this data was fed into the system, which the participants then explored before being asked to play the same game again. The recorded second session was used to investigate whether the self-reflection prompted by the system and its various components affected in-game prosocial behavior between the two gameplay sessions. More specifically, the following research questions are addressed:
- RQ1: How does the designed system affect players' self-reflection on their in-game prosocial behavior?
- RQ2: How does the designed system affect the amount of prosocial behavior players exhibit in-game?
The participants also evaluated the system's design and ease of use, to examine whether usability was (or was not) a factor in the overall assessment of the system.
