Riot Games working with Ubisoft is a partnership I didn’t think I would ever write about. And yet, here we are!
Today, the two competitive gaming and esports-minded developers are announcing a joint research project called “Zero Harm in Comms.”
The goal is to create a shared database of anonymized data used to train Ubisoft and Riot’s systems to detect and mitigate disruptive behaviour.
According to Ubisoft’s press release, the idea to improve AI prediction and learning systems for dealing with harm came from conversations between Yves Jacquier, Executive Director of Ubisoft’s La Forge R&D department, and Wesley Kerr, Riot Games’ Head of Tech Research.
“We agreed that the solutions that we can use today are not sufficient for the kind of player safety we have in mind for our players,” says Jacquier.
“We really recognized that this is a bigger problem than one company can solve,” says Kerr. “And so, how do we come together and start getting a good handhold on the problem we’re trying to solve? How can we go after those problems, and then further push the entire industry forward?”
The two are hoping that the answers are in the chat logs from their IPs. They will start with these logs as data, scrubbed clean of any personal information or identifiers. The data will then be labelled based on harms like racism, sexism and ableism, and used to train in-game AI to detect these harms earlier.
“There are keywords that can be immediately recognized as bad,” elaborates Jacquier. “However, it’s often much trickier to parse. For example, if you see ‘I’m going to take you out’ in a chat, what does that mean? Is it part of the fantasy? If you’re playing a competitive shooter, it might not be a problem, but if it’s another type of game, the context might be totally different.”
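Neither company has shared how its detection actually works, but Jacquier’s point can be roughly illustrated with a purely hypothetical Python sketch contrasting a blanket keyword filter with one that accounts for game context (every name, phrase and genre here is invented for illustration, not drawn from either company’s systems):

```python
# Hypothetical sketch only: a phrase like "take you out" is flagged or not
# depending on the game it appears in, per Jacquier's example.

# Phrases that are part of the normal "fantasy" for a given genre.
GENRE_ALLOWLIST = {
    "competitive_shooter": {"take you out"},
}

# Stand-in for what would, in practice, be a trained model's output.
SUSPECT_PHRASES = {"take you out"}

def flag_message(message: str, genre: str) -> bool:
    """Return True if the message should be surfaced for review."""
    allowed = GENRE_ALLOWLIST.get(genre, set())
    text = message.lower()
    return any(
        phrase in text and phrase not in allowed
        for phrase in SUSPECT_PHRASES
    )

print(flag_message("I'm going to take you out", "competitive_shooter"))  # False
print(flag_message("I'm going to take you out", "farming_sim"))          # True
```

A real system would replace the hard-coded phrase list with a model trained on the labelled, anonymized logs the two companies are pooling, but the core idea is the same: the phrase alone isn’t enough, and the context decides.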
It’s a start, and one that both companies want to make highly visible to their players to encourage a more welcoming gaming experience. “We want players to know we are taking action on this,” says Kerr.
The two have already been working on “Zero Harm in Comms” for six months and plan to share more results next year.