Presented by Modulate
The trust and safety team of social gaming platform Rec Room has seen tremendous results in reducing toxicity over the past 18 months. In this VB Spotlight, dive into the metrics, tools and strategies they used to make players happier, increase engagement and change the game.
Improving player experience and safety should be top of mind for game developers. In this recent VB Spotlight, Mark Frumkin, director of account management at Modulate, and Yasmin Hussain, head of trust and safety at Rec Room, talked about protecting players from toxicity, as seen through the lens of Rec Room's trust and safety team and its work with ToxMod, a proactive voice chat moderation solution powered by machine learning.
Launched in 2016, Rec Room is a social gaming platform with more than 100M lifetime users. Players interact in real time through both text and voice chat across PC, mobile, VR headsets and console, using avatars to make the experience come alive.
“Rec Room was created in order to create a space for millions of worlds and different rooms — not just that we create, but that our players can create,” Hussain said. “Trust and safety is a critical part of that.”
But real-world, real-time interactions over voice chat mean there's an inevitable cohort of people behaving badly. How do you change the behavior of players who aren't upholding community standards?
Over the last year of experimentation and iteration on this idea, Rec Room reduced instances of toxic voice chat by around 70%, Hussain said, but it didn’t happen instantly.
Combating toxicity one step at a time
The first step was to extend continuous voice moderation coverage across all public rooms, which helped maintain consistency around the platform's expectations for behavior. The next step was nailing down the most effective response when players stepped out of line. The team ran a broad array of tests, from different mute and ban lengths to two variations of warnings: one very strict, and one that offered positive encouragement about the kind of behavior the team wanted to see.
They found that when violations were detected instantly, the one-hour mute had a huge impact on reducing bad behavior. It was an immediate and very tangible reminder to players that toxicity wouldn't be tolerated. Not only did that real-time feedback change the way players behaved in the moment, it also kept them in the game, Hussain said.
It wasn’t a full-on cure for toxicity in the game, but it made a big dent in the issue. When they dug in, they found that a very small percentage of the player base was responsible for more than half of the violations. How could they directly address that specific cohort?
“There was a disproportionate connection between these very small player cohorts and a very large number of violations, which then gave us the cue to set up a further experiment,” she said. “If we change how we intervene — give you a mute the first time, or give you a warning, and then mute you again and again, but you’re not learning that lesson — perhaps we can start stacking our interventions where they strengthen each other. We’re seeing some great results from that.”
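The stacked interventions Hussain describes can be pictured as an escalation ladder: a first offense draws a warning, and repeat offenses draw progressively stronger responses. Below is a minimal Python sketch of that idea; the specific tiers, durations and data structures are assumptions for illustration, not Rec Room's actual policy.

```python
from dataclasses import dataclass

# Hypothetical escalation ladder -- the tiers, durations and ban step are
# placeholders for illustration, not Rec Room's published policy.
ESCALATION = [
    ("warning", 0),          # 1st violation: warning only
    ("mute", 60 * 60),       # 2nd violation: 1-hour mute
    ("mute", 24 * 60 * 60),  # 3rd violation: 24-hour mute
    ("ban", None),           # 4th+ violation: ban pending human review
]

@dataclass
class PlayerRecord:
    violations: int = 0

def next_intervention(record: PlayerRecord) -> tuple:
    """Advance a player's violation count and return (action, duration_seconds)."""
    record.violations += 1
    tier = min(record.violations - 1, len(ESCALATION) - 1)
    return ESCALATION[tier]

# A repeat offender climbs the ladder with each detected violation.
player = PlayerRecord()
for _ in range(4):
    action, duration = next_intervention(player)
    print(action, duration)
```

The point of stacking interventions this way is that each response reinforces the last one: players who don't learn from a warning or a short mute meet a steadily firmer consequence.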
Creating and running trust and safety experiments
There are specific metrics to track in order to iterate on player moderation strategies, Frumkin said. That includes the profile and the prevalence of toxicity: What are people saying? How often are they saying it? Who are these rule-breakers, how many are there and how often do they violate the code of conduct?
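As a rough illustration of those metrics, the sketch below tallies violations by category and by player from a toy event log, and estimates how concentrated violations are among the heaviest offenders. The log fields and numbers are invented for the example; they are not ToxMod's data model.

```python
from collections import Counter

# Toy violation log -- player IDs and categories are invented for illustration.
violations = [
    {"player": "p1", "category": "harassment"},
    {"player": "p1", "category": "hate_speech"},
    {"player": "p2", "category": "harassment"},
    {"player": "p1", "category": "harassment"},
    {"player": "p3", "category": "spam"},
]

per_player = Counter(v["player"] for v in violations)      # who breaks the rules, and how often
per_category = Counter(v["category"] for v in violations)  # what people are saying (the profile of toxicity)

# Concentration: what share of all violations comes from the single heaviest offender?
top_count = per_player.most_common(1)[0][1]
top_share = top_count / len(violations)

print("violations by category:", dict(per_category))
print("repeat offenders:", {p: c for p, c in per_player.items() if c > 1})
print(f"share from the heaviest offender: {top_share:.0%}")
```

Tracking concentration like this is what surfaced Rec Room's finding that a very small cohort of players accounted for more than half of all violations.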
At the start, you also need to be very clear about what the hypothesis is, what behavior you’re trying to change, what outcome you’re looking for and what success looks like.
“The hypothesis is key,” Hussain said. “When we were testing the interventions and the right way to reduce violations to start with, that was very different than when we were trying to change the behavior of a subset of our player population.”
Iteration is also key: to learn, fine-tune and tweak. But so is ensuring your experiments run long enough to gather the data you'll need and to actually shift player behavior.
“We want them to stay within the community standards and be positive members of that community. That means unlearning certain things that they might have been doing for a while,” she said. “We need that three, four, six weeks for that to play out as people experience this new normal that they’re in and learn from it and change what they’re doing.”
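To make the experiment mechanics concrete, here is a minimal sketch of comparing a control arm against a new intervention over a six-week window, using a violations-per-1,000-voice-hours rate. The arm names, metric and numbers are assumptions for illustration only.

```python
# Toy weekly violation rates (per 1,000 hours of voice chat) for two experiment arms.
# All numbers are invented for illustration.
weeks = ["wk1", "wk2", "wk3", "wk4", "wk5", "wk6"]
violations_per_1k_hours = {
    "control":      [12.0, 11.8, 12.1, 11.9, 12.0, 11.7],
    "intervention": [11.5, 10.2, 9.0, 8.1, 7.6, 7.4],
}

def relative_change(series):
    """Percent change from the first week to the last week of the window."""
    return (series[-1] - series[0]) / series[0] * 100

for arm, series in violations_per_1k_hours.items():
    print(f"{arm}: {relative_change(series):+.1f}% over {len(weeks)} weeks")
```

Running the comparison across the full multi-week window, rather than a few days, is what lets the "new normal" Hussain describes actually show up in the data.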
However, there's always more to be done. Sometimes you make progress on one specific issue, but then the problem evolves, which means moderation strategies must keep improving and evolving alongside it. Moderating speech in real time remains a tremendous challenge, for instance, but the Rec Room team is confident that its interventions are now accurate and that players feel safer.
“We’ve had this tremendous success in driving down the number of violations and improving how our platform feels — around 90 percent of our players report feeling safe and welcome and having fun in Rec Room, which is incredible,” she said. “What we’re finding is that it’s not just enough for justice to be done, or for us to encourage our players to change their behavior. Other players need to see that happening so they also get reassurance and affirmation that we are upholding our community standards.”
The future of AI-powered voice moderation
To ultimately make Rec Room an even safer and more fun place, ToxMod continuously analyzes the data around policy violations, language and player interactions, Frumkin said. But moderation should also evolve. You want to discourage behavior that violates standards and codes of conduct, but you also want to encourage behavior that improves the vibe and the experience for other Rec Room players.
“We’re also starting to develop our ability to identify pro-social behavior,” he added. “When players are good partners, when they’re supportive of other members in the same space — good at de-escalating certain situations that tend to rise in temperature — we want to be able to not just point out where problems are, but we want to be able to point out where the role models are as well. There’s a lot you can do to increase the impact and amplify the impact of those positive influences within your community.”
Voice moderation is extremely complex, especially for real-time audio, but AI-powered tools are making a tremendous impact on moderation strategies, and what teams can actually achieve.
“It means that you can raise your ambitions. Things you thought were impossible yesterday suddenly become possible as you start doing them,” Hussain said. “We’re seeing that with how available, how efficient, how effective the wider range of machine learning is becoming. There’s a huge opportunity for us to leverage that and keep our community as safe as we can.”
To learn more about the challenges of toxicity in games, strategies to effectively change player behavior and how machine learning has changed the game, don’t miss this VB Spotlight, free on demand.
Agenda
- How voice moderation works to detect hate and harassment
- Rec Room’s success and learnings in building a voice moderation strategy
- Essential insights gained from voice moderation data every game developer should be gathering
- How reducing toxicity can increase player retention and engagement
Presenters
- Yasmin Hussain, Head of Trust & Safety, Rec Room
- Mark Frumkin, Director of Account Management, Modulate
- Rachel Kaser, Technology Writer, VentureBeat (Moderator)