Nobody wins if this guy just repeals the whole thing outright, you dumb fuck.
Unless, you know, it actually gets replaced by something that works as the functional version of 230 people want.
With all due respect to the shouting from everyone involved here, I think there are some parts that genuinely could use rewriting. Mostly just 230(c)(2), i.e. the "you aren't civilly liable for deleting/restricting access to essentially anything/anyone for any reason" part. Okay, we can get nit-picky about the actual wording, what with its "good faith" requirement and all, but putting "otherwise objectionable" in there essentially gives a pass for a blanket blacklist. Really, the whole thing needs to be rewritten to be more in line with the stated goal of "Protection for 'Good Samaritan' blocking and screening of offensive material", or perhaps the goal should be reevaluated. One way or the other, the goal and the code don't match.
230(c)(1) inevitably gets dragged into things because it is the actual important bit: the part where Twitter and Alphabet and Null are not currently liable for the death threats, calls for violence, and everything else that is continuously on their sites, because they do not pre-moderate and approve everything before posting. 230(c)(1) specifies that, because it exists, Null isn't liable for what I say, I'm not liable for what he says, etc. However, I would like to quickly ask if anyone can point me to anything indicating that Null would be any more liable for what I say if 230(c)(1) didn't exist. Wouldn't there need to be something else specifying him as the publisher/speaker of my words for him to be held as such? Also, at what point does information I provide stop being information I provide? Take my avatar, for instance: I don't recall what resolution I uploaded it at. If your server resized it, does that still count as my information? Just saying. Anyway, 230(c)(1) is in general fine, and a useful statement to have on the books.
Oh, and while I'm asking dumb legal questions, let me just throw this out here... 230(e) states that none of this has any effect on criminal law, IP law, state laws, communications privacy law, or sex trafficking law. So wouldn't that mean Twitter should still be held liable as a publisher for the child porn or terrorist materials on their site, because even though 230(c)(1) says they didn't publish it, 230(e) says it doesn't affect the enforcement of those laws? Like, how does that actually work? I feel I've just lapped myself and come around to the question of "Does 230(c)(1) actually do anything at all?"
Anyway, what all this isn't addressing is the whole problem of monopolies/giants in the tech space. There is a desire, driven by anti-authoritarian ideologies, for government to impose some sort of restriction on the tech giants. Let me emphasize that for those of you who don't get the message: the tech giants are considered the big authoritarian bad guys, and there exists a desire to give the government power to rein them in to protect the little guy. So to that end, I'd love to see some takes on a rewrite of 230 that attempts to address that issue. We have processes by which to change this shit, and I'll not be one to dismiss the whole thing before it has even started.