Intel's new voice AI will block hate speech instead of doing anything useful, like noise cancelling. No, it's not a meme. I pray to Christ it needs the new processors to be implemented.

I was thinking of children playing games, although children shouldn't really be playing online with chat functions to begin with.
Online services actually made for children have systems like this built in, or at least used to. Years ago, some online-only games had safe-chat features that restricted accounts under parental controls to a fixed list of approved words and phrases. This makes far more sense than attempting to censor all forms of chat. Even in text it doesn't work out.
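The allowlist approach that post describes is simple to sketch. This is a hypothetical toy, not any real game's system; the phrase list and function name are invented for illustration:

```python
# Toy sketch of an allowlist ("safe chat") filter: a restricted account
# can only send pre-approved phrases, so novel slurs can never appear.
APPROVED_PHRASES = {
    "good game",
    "nice one",
    "follow me",
    "help",
}

def safe_chat_allows(message: str) -> bool:
    """Return True only if the whole message is an approved phrase."""
    return message.strip().lower() in APPROVED_PHRASES

print(safe_chat_allows("good game"))   # on the list, allowed
print(safe_chat_allows("you suck"))    # unknown, blocked
```

The design difference the post is pointing at falls out immediately: an allowlist fails closed (anything unknown is blocked), while a blocklist fails open (anything unknown gets through), which is why an allowlist never has to chase newly invented slurs.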
 
Attempting to use AI to censor any language is stupid as hell and only encourages people to invent new slurs. What's the point of it anyway? Most games have muting functions, and all voice chat services do at this point. If someone is offending you, just mute them. It's that easy. Companies do not need to develop tools to take away the agency of clicking a button.
Exactly. Language will just start changing at breakneck speed so people can say what they really want to without getting caught by the bots.
 
 
Eh, this will undoubtedly be a pretty disappointing product in practice. Yeah, you can probably filter out obvious slurs without issue, but people will just invent new ones, and you can't have an AI keep up in this arms race without moving from simple syntax rules to much more complicated ones very quickly. And you can't bleep stuff if there's a meaningful false-positive rate where you're also filtering out legitimate communication.

E.g. there are probably plenty of games where telling someone to "kill themselves" is actually legitimate advice, so bleeping this as aggressive is a non-starter.

Plus people will just come up with more-complicated slurs whose insulting meaning would be clear to a regular human but not necessarily to an AI, which lacks the context to know whether the insult is "legitimate" or not. E.g. in places where "kill yourself" is banned, people have started using "keep yourself safe" to mean the same thing. Bleep would have a hard time telling when this phrase is being used in an aggressive context.

Essentially these sorts of battles have already played out in spaces that are subject to more-traditional moderation. What we see is that just getting rid of obvious slurs isn't really that important to diminishing toxicity, since people are pretty creative and an AI just wouldn't be able to keep up.
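The arms-race dynamic the posts above describe can be shown with a toy blocklist filter. Everything here is invented for illustration (the word list, function name, and bleep style are not from Intel's product): the moment users switch to a euphemism, the filter passes it untouched.

```python
import re

# Toy blocklist filter: bleep any listed phrase, pass everything else.
BLOCKED = ["kill yourself"]

def bleep(message: str) -> str:
    """Replace blocked phrases with asterisks; leave the rest alone."""
    for phrase in BLOCKED:
        message = re.sub(re.escape(phrase), "*" * len(phrase),
                         message, flags=re.IGNORECASE)
    return message

print(bleep("kill yourself"))        # caught and bleeped
print(bleep("keep yourself safe"))   # euphemism passes untouched
```

This is exactly the fail-open problem: the filter only knows what it has already been told about, so moderation becomes a race to retrain against whatever coded phrase the community adopts next.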

But I'm mostly commenting here on the viability of this particular innovation as I understand it: the censorship of real-time audio streams. A lot of work has already been done on reviewing audio/text streams offline to detect problematic content, with whatever Orwellian implications that carries. I'm not sure this innovation adds that much, although China would probably love to use it if they don't have something like it already. I'm sure there are a lot of big American firms that would like to make sure that words like "Taiwan" and "Hong Kong" can never be broadcast in live events.
 
Eh, this will undoubtedly be a pretty disappointing product in practice.
Google does a phenomenal job as it is of sinking wrongthink in search results. They're basically singlehandedly shaping an entire generation of people to think a certain way. They revise dictionaries in real time. It's unreal. These things just take time to perfect, and nobody can complain because that's just far-right hate speech.
 
Google does a phenomenal job as it is of sinking wrongthink in search results. They're basically singlehandedly shaping an entire generation of people to think a certain way. They revise dictionaries in real time. It's unreal. These things just take time to perfect, and nobody can complain because that's just far-right hate speech.
Jesus Christ dude, save some black pills for Tim Pool, since it looks like you're hogging all of them.
 
Google does a phenomenal job as it is of sinking wrongthink in search results. They're basically singlehandedly shaping an entire generation of people to think a certain way. They revise dictionaries in real time. It's unreal. These things just take time to perfect, and nobody can complain because that's just far-right hate speech.
Sure, like I said, the tech to do this sort of censorship already exists, and there's plenty to worry about in its application. But I don't see real-time audio censorship as adding too much, because literally the only new action associated with it (maybe there are more outlined in the full video) is a decision to bleep, and I think people will route around it pretty easily.

Are there lots of worrisome implications to offline audio stream parsing being used to censor or disrupt? Of course. But Bleep isn't pathbreaking tech in that space. My comments are narrowly confined to what Bleep enables, not the broader issues about how AI can be applied to online communication to fuck with all sorts of things.
 
I have to wonder if they're going to fuck this up even more catastrophically than they already have, turning the woke audience against them, by trying to employ some sort of filter that attempts to determine the race of the speaker. This would then be used to detect whether a black speaker has said the n-word and allow it, because it's magically only okay for those of African descent. This would fail, obviously, unable to detect anything properly through lack of training data or just the limits of current machine learning, and people would manage to trick it and say the word regardless of race. I can't see a company like Intel restraining itself enough not to try something even dumber than making this in the first place.

Outside of this hypothetical, I just can't see trying to limit voice chat of all things with machine learning being anything more than an endless game of whack-a-mole. Will it automatically filter slurs from languages other than English? What if they overlap with regular English words? When people start making new 'offensive' words to get around the filters, how far will Intel go in retraining the AI to try and stamp those out too? I can just see this kind of program going ham, filtering random shit that vaguely sounds similar to the words it's after. This whole development is going to be a massive waste of time on Intel's part. Is there something I'm missing, and the cyberpunk dystopia of a sapient AI filtering the world has somehow come to pass?
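The overlap problem that post raises is the classic "Scunthorpe problem": a blocklist that matches substrings flags innocent words that merely contain a blocked string. A toy sketch, with an invented word list chosen purely so the collisions are visible, not anything from a real filter:

```python
# Toy substring blocklist illustrating the "Scunthorpe problem":
# innocent words get flagged because they contain a blocked substring.
BLOCKED_SUBSTRINGS = ["ass", "hell"]

def is_flagged(word: str) -> bool:
    """Flag a word if any blocked substring appears anywhere in it."""
    w = word.lower()
    return any(s in w for s in BLOCKED_SUBSTRINGS)

for word in ["class", "bassoon", "shell", "hello", "grape"]:
    print(word, "->", "FLAGGED" if is_flagged(word) else "ok")
```

Audio makes this strictly worse than text: a speech model matches on sound rather than spelling, so the collisions happen between homophones and across languages, exactly the cross-language overlap the post is asking about.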
It'll be hilarious if they try to let black people say the n-word. Many people will be working on their african impressions. But banning it altogether for even blacks, isn't that some kind of x-ism?
 
It'll be hilarious if they try to let black people say the n-word. Many people will be working on their african impressions. But banning it altogether for even blacks, isn't that some kind of x-ism?
As cute as it is to imagine I guarantee that they will not do this. SJWs are not going to be outraged that they aren't exempted from the filter when they're trying to "punch up".

Also, a lot of people are curious about the "White Nationalism" tag and what it could mean, but I think it's fairly obvious: there will be a list of blacklisted phrases like "Heil Hitler" and whatnot. There *will* be interesting cultural battles, though, when SJWs start demanding that various anti-SJW phrases/memes be censored under White Nationalist or WN-adjacent tags, e.g. "Trump won" or "It's Okay To Be White" or whatever else. Intel probably hasn't fully grasped that if this tech catches on, they're going to be swarmed with "here are the horrible things that Intel is allowing people to say" NYT/Buzzfeed hall-monitor thinkpieces in a couple of years.

I think the real worry with this tech is not Bleep in its narrow form, but what will happen when related products meant to censor entire topics start hitting the market. Imagine platforms being pressured to start censoring vaccine / climate / election "misinformation" in real-time broadcasts; that's where things will start getting ugly.
 
Soon there'll be a law where we'll be forced to think happy and act like we live in a perfect society. Anybody who doesn't gets the death sentence.
You won't be forced to be happy, you WILL be happy because they'll put drugs in your soy and bugs and if you aren't then you'll be taken to the psychiatrist to be reprogrammed. That way you too will be happy owning nothing.
 
You won't be forced to be happy, you WILL be happy because they'll put drugs in your soy and bugs and if you aren't then you'll be taken to the psychiatrist to be reprogrammed. That way you too will be happy owning nothing.
WE GOT A DOWNA
TAKE YOUR JOY!
TAKE YOUR JOY!
TAKE YOUR JOY!
 
Technology making it possible for people to live in their own reality in real-time :story:

nigger has had a good run but it's time for more high tech slurs. slurs so complex that no AI can ever dream of blocking them. after all, if there's no way to inform the person on the other end that they're behaving like an african american then the spirit of gaming will be dead for good.
The perfect term for this already exists: Basketball Americans. There is no way a word as basic as "basketball" gets filtered by current AIs, though given enough time they'll get good enough to pick up the context.

Null included the first part of the list, but at the very bottom is a “white nationalism” censor list.

How the fuck is “white nationalism” able to be AI censored? If you say “Andrew Anglin” do you get sniped from your gamer chair?
No dedicated anti-semitism filter? Someone plz call the SPLC/ACLU/ADL/etc.
 
WE GOT A DOWNA
TAKE YOUR JOY!
TAKE YOUR JOY!
TAKE YOUR JOY!
Inb4 The Very Bad Thing was letting Donald Trump get elected.

This AI seems primitive now, but give it a few years. Also give it a few years and you won't have the option to change settings or turn anything off.
 