'Valorant' Will Use Your Voice to Train AI to Detect 'Disruptive Behavior'

CatParty


Starting July 13, Valorant will begin listening to and recording in-game voice communications, with the goal of training a language model to help it identify toxic chat. The system will only run on North American, English-only servers. Riot aims to launch the language model later this year in an effort to clamp down on toxicity in the first-person shooter.


As in any first-person shooter that lets players talk to each other, the voice chat in Riot Games’ Valorant can be toxic. It’s also incredibly hard to moderate. When someone drops a racial slur in text chat, a clear log is maintained that mods can look through later. But doing the same for voice chat would demand processing and storage power that just isn’t practical. “Voice chat abuse is significantly harder to detect compared to text (and often involves a more manual process),” Riot Games said in a February 2022 blog post.
Riot first indicated it would do something about abusive voice chat in February 2022. “Last year Riot updated its Privacy Notice and Terms of Service to allow us to record and evaluate voice comms when a report for disruptive behavior is submitted, starting with Valorant,” it said at the time. “Please note that this will be an initial attempt at piloting a new idea leveraging brand new tech that is being developed, so the feature may take some time to bake and become an effective tool to use in our arsenal. We’ll update you with concrete plans about how it’ll work well before we start collecting voice data in any form.”
Now we know what that brand new tech is: some kind of language model that automatically detects toxic voice chat and stores it for later evaluation. The updated terms of service applied to all of Riot’s games, but Riot said its current plan was to use Valorant to test the software before rolling it out to other titles.
The ability to detect keywords in live conversations is not new, however. Federal and state governments have been using similar systems to monitor phone calls from prisons and jails for at least several years, sometimes with the ability to automatically disconnect and report calls when certain words or phrases are detected.
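Neither Riot nor the prison-phone vendors have published their pipelines, but the usual shape of keyword spotting is speech-to-text followed by a watchlist scan. Here is a minimal sketch of that pattern; the `transcribe` stub and the watchlist terms are hypothetical stand-ins, not anything either party has confirmed:

```python
# Minimal keyword-spotting sketch: transcribe audio, then scan the
# transcript against a watchlist. `transcribe` is a hypothetical stand-in
# for any speech-to-text model; no vendor's actual pipeline is public.

WATCHLIST = {"badword", "slur_a"}  # placeholder terms

def transcribe(audio_chunk: bytes) -> str:
    # Stand-in for a real ASR call; returns a canned transcript for demo.
    return "push b site badword now"

def flag_chunk(audio_chunk: bytes) -> list[str]:
    """Return the watchlist terms heard in this chunk of audio."""
    words = transcribe(audio_chunk).lower().split()
    return [w for w in words if w in WATCHLIST]

# A prison-phone-style system would run this over a live stream and
# disconnect or report the call the moment the list is non-empty.
print(flag_chunk(b"\x00" * 16000))  # ['badword']
```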
Riot Games did not share details of the language model and did not immediately respond to Motherboard’s request for comment. According to a post announcing the training of the language model, this is all part of “a larger effort to combat disruptive behavior” that will allow Riot Games to “record and evaluate in-game voice communications when a report for that type of behavior is submitted.”
The updated terms of service had some more specifics. “When a player submits a report for disruptive or offensive behavior in voice comms, the relevant audio data will be stored in your account’s registered region and evaluated to see if our behavior agreement was violated,” the TOS said. “If a violation is detected, we’ll take action. After the data has been made available to the player in violation (and is no longer needed for reviews) the data will be deleted, similar to how we currently handle text-based chat reports. If no violation is detected, or if no report is filed in a timely manner, the data will be deleted.”
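Read literally, the TOS describes a simple report-triggered lifecycle: store in-region, evaluate, act or not, then delete. A hedged sketch of that flow follows; every name in it is illustrative, since Riot has published no implementation details:

```python
# Hedged sketch of the retention lifecycle the TOS describes:
# report -> store in the account's region -> evaluate -> act -> delete.
# All names are illustrative; Riot has published no implementation details.
from enum import Enum, auto

class Outcome(Enum):
    VIOLATION = auto()
    NO_VIOLATION = auto()

def handle_report(audio: bytes, evaluate) -> str:
    stored = audio                  # persisted only because a report exists
    outcome = evaluate(stored)      # model and/or human review
    if outcome is Outcome.VIOLATION:
        action = "penalty applied"  # audio shown to the player, then deleted
    else:
        action = "no action"        # deleted without further review
    del stored                      # deleted either way once resolved
    return action

print(handle_report(b"...", lambda _: Outcome.NO_VIOLATION))  # no action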
Riot Games said it would only “monitor” voice chats if a report had been submitted. “We won’t actively monitor your live game comms. We’ll only potentially listen to and review voice logs when disruptive voice behavior is reported,” it said in a Q&A about the changes. That is still monitoring, though, even if it’s not active. What this probably means is that a human won’t listen to it unless there’s a report—but that doesn’t mean a computer isn’t always listening.
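One plausible reading of "not actively monitored, but reviewed on report" is a rolling buffer: audio is always captured, continuously discarded, and only frozen for human review when a report lands. The sketch below is purely a guess at that architecture, not anything Riot has described:

```python
from collections import deque

class VoiceBuffer:
    """Rolling buffer: audio is always captured, but only persisted
    (and thus ever heard by a human) when a report freezes it.
    Purely illustrative; Riot has not described its mechanics."""

    def __init__(self, max_chunks: int = 600):  # e.g. ~10 min of 1s chunks
        self._chunks: deque = deque(maxlen=max_chunks)

    def push(self, chunk: bytes) -> None:
        # Old audio silently falls off the left end; nothing is stored.
        self._chunks.append(chunk)

    def freeze_for_report(self) -> list:
        # On a report, snapshot the recent audio for later evaluation.
        return list(self._chunks)

buf = VoiceBuffer()
buf.push(b"\x00" * 16000)           # one second of (fake) PCM audio
evidence = buf.freeze_for_report()  # only this snapshot would be reviewed
```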
The only way to avoid this, Riot Games said, was to not use its in-game voice chat systems. Monitoring starts on July 13. “Voice evaluation during this period will not be used for disruptive behavior reports,” Riot Games said in the post announcing the project. “This is brand new tech and there will for sure be growing pains. But the promise of a safer and more inclusive environment for everyone who chooses to play is worth it.”
 
First off, this is just an exercise to train an AI to detect the words of CCP dissidents.

Second, this won't solve "toxic chat". It'll just force players to use other words that the AI doesn't know are supposed to be toxic. It's a Hydra: cut off one head, another two grow in its place. And that's not counting the players who'll just move to Mumble, Discord, TeamSpeak, or some other VoIP service.
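The evasion point is easy to demonstrate with a toy filter: any model keyed to a fixed vocabulary goes silent the moment players agree on a substitute term. Placeholder words below, and no real model involved:

```python
# Toy illustration of vocabulary evasion: a filter trained on known
# terms misses any substitute the community agrees on. Placeholder
# words stand in for actual slurs.

KNOWN_TOXIC = {"badword"}

def is_toxic(utterance: str) -> bool:
    return any(word in KNOWN_TOXIC for word in utterance.lower().split())

print(is_toxic("you badword"))   # True  -> caught; one head cut off
print(is_toxic("you crabword"))  # False -> two more grow in its place
```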
 
It’s all so tiresome, yet no one wants to admit the easiest solution is to tell the crybabies to shut the fuck up and learn to deal with it.
 
Video games are gay and so are adults who play them. I will laugh at everyone who takes this seriously, and especially at people who get banned for saying the gamer word, ESPECIALLY if they lose money over it. idk how that works.

Also:
那個 那個 那個 那個 那個 那個 那個 那個 那個 那個 那個 那個 那個 那個 那個 ("nàge", the Chinese filler word for "that one")
 
More than once I've been in games where a guy with an open mic was audibly beating his wife. Will that count as disruptive behavior?
If they were a troon or LGBTBBQ, no.

Edit:
动态网自由门 天安門 天安门 法輪功 李洪志 Free Tibet 六四天安門事件 The Tiananmen Square protests of 1989 天安門大屠殺 The Tiananmen Square Massacre 反右派鬥爭 The Anti-Rightist Struggle 大躍進政策 The Great Leap Forward 文化大革命 The Great Proletarian Cultural Revolution 人權 Human Rights 民運 Democratization 自由 Freedom 獨立 Independence 多黨制 Multi-party system 台灣 臺灣 Taiwan Formosa 中華民國 Republic of China 西藏 土伯特 唐古特 Tibet 達賴喇嘛 Dalai Lama 法輪功 Falun Dafa 新疆維吾爾自治區 The Xinjiang Uyghur Autonomous Region 諾貝爾和平獎 Nobel Peace Prize 劉暁波 Liu Xiaobo 民主 言論 思想 反共 反革命 抗議 運動 騷亂 暴亂 騷擾 擾亂 抗暴 平反 維權 示威游行 李洪志 法輪大法 大法弟子 強制斷種 強制堕胎 民族淨化 人體實驗 肅清 胡耀邦 趙紫陽 魏京生 王丹 還政於民 和平演變 激流中國 北京之春 大紀元時報 九評論共産黨 獨裁 專制 壓制 統一 監視 鎮壓 迫害 侵略 掠奪 破壞 拷問 屠殺 活摘器官 誘拐 買賣人口 遊進 走私 毒品 賣淫 春畫 賭博 六合彩 天安門 天安门 法輪功 李洪志 Winnie the Pooh 劉曉波动态网自由门
 
Just because, here are the words that frequently get Chinese, Korean, and Japanese speakers in trouble: naega (Korean for "I/me"), nàge (Chinese for "that one", usually a filler word), and nigeru (Japanese for "to run away"). In Korean's case there is a different word, jaega, typically used by older people. It would probably still get confused for a racial slur thanks to "jigaboo".
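The confusion is easy to approximate: once speech is reduced to a romanized transcript, plain string similarity puts some of these words close to blocklisted terms. The toy measurement below is a crude text-side proxy for what an acoustic model mishears, using only the standard library; the 0.6 cutoff is an illustrative guess, not any vendor's threshold:

```python
# Toy demonstration: romanized CJK words can land close to blocklisted
# terms under plain string similarity. Cutoff is an illustrative guess.
from difflib import SequenceMatcher

BLOCKLIST = ["nigger", "jigaboo"]  # the terms the post says cause confusion

INNOCENT = {
    "naega":  "Korean for 'I/me'",
    "nage":   "Chinese filler, 'that one' (那個)",
    "nigeru": "Japanese for 'to run away'",
    "jaega":  "Korean variant used by older speakers",
}

def closest(word: str):
    """Closest blocklist term and its similarity score."""
    scored = [(t, SequenceMatcher(None, word, t).ratio()) for t in BLOCKLIST]
    return max(scored, key=lambda pair: pair[1])

for word, gloss in INNOCENT.items():
    term, score = closest(word)
    verdict = "FLAGGED" if score > 0.6 else "ok"
    print(f"{word:7} ({gloss}): {score:.2f} vs {term!r} -> {verdict}")
```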
 
It would have been funny if sane users had reported every black person who says nigger; then the algorithm would only target them. But in the end the bias will be that the worst people are the ones who routinely report others, and they're the ones feeding data to the algorithm.
 