"Mad at the Internet" - a/k/a My Psychotherapy Sessions

The algorithm is the sole creation of the platforms. It is their way of showing you what content it deems you need to see. The argument goes that this way the harm is not done by a third party (which doesn't even have any control over the algorithm), but by the platform itself. If memory serves, this is the first court that found that this argument had merit.
It seems a very... strange argument. If you watched nothing but My Little Pony shit on YouTube, that's what "the algorithm" would feed you. Garbage in, garbage out. That's pretty well every social media site. It depends on what you tell the machine.

My question is how does this affect the farms?
 
If you watched nothing but My Little Pony shit on YouTube, that's what "the algorithm" would feed you.
Yeah, but Alphabet specifically designed the algorithm to work a specific way. If it led you to content that harmed you, it would be weird to say Alphabet took no direct action. That doesn't necessarily mean that it would end up liable in the end, but the argument doesn't seem outrageous.

To put it another way, if Alphabet designs an algorithm specifically to promote/teach/do illegal activity, it would be weird to totally insulate it from the underlying crimes.

Of course, my hypothetical is super specific, and not that easily applicable, but you get the idea.

I don't think this will survive on appeal though unless the evidence presented was really good.

My question is how does this affect the farms?
I don't see how it affects the farms at all. The only similar feature would be the feature system, but that can only be done by Mods, so if anyone were to sue for something written in a feature, 230 wouldn't apply.
 
If you watched nothing but My Little Pony shit on YouTube, that's what "the algorithm" would feed you.
Except it wouldn't for YouTube. Retention is the name of the game, and there is only so much related MLP content on the platform, as well as only so much any one individual actually wants to consume. The algo will start spitting related trash towards you to try and keep you there. Occasionally it will spit seemingly random content that users like you clicked on, or other viral content to keep you there. If you interact with that content you get more tailored, but broader, shit. Potentially "harmful" content starts being fed to you as you engage. The algo is strong. What you're describing is how the algo worked in its infancy. As it's automated and the user chooses what to click on, I still don't really buy their argument; it seems like a stretch. KF is unaffected whichever way this goes.
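For what it's worth, the retention loop being described can be sketched in a few lines. This is a toy illustration only; the categories, weights, exploration rate, and update rule are all invented for the example and have nothing to do with YouTube's actual system, but it shows how engagement-weighted recommendation plus a bit of "random/viral" exploration drifts you away from the niche you started with.
```python
import random

# Toy sketch of the retention loop described above. Purely illustrative:
# the categories, weights, exploration rate, and update rule are made up
# for the example and are not YouTube's actual system.
CATALOG = {
    "mlp": ["mlp_1", "mlp_2", "mlp_3"],                 # the niche you searched for
    "adjacent": ["toy_reviews", "animation", "memes"],  # "related trash"
    "viral": ["drama", "rage_bait", "challenges"],      # broad retention content
}
weights = {"mlp": 0.8, "adjacent": 0.15, "viral": 0.05}
EXPLORE = 0.3  # chance of ignoring the weights and serving something random/viral

def recommend():
    if random.random() < EXPLORE:
        cat = random.choice(list(CATALOG))  # the "seemingly random" content
    else:
        cat = random.choices(list(weights), weights=list(weights.values()))[0]
    return cat, random.choice(CATALOG[cat])

def register_click(cat, clicked):
    # Engagement shifts weight toward whatever got clicked, then renormalize.
    if clicked:
        weights[cat] += 0.1
    total = sum(weights.values())
    for k in weights:
        weights[k] /= total

for _ in range(200):
    cat, _video = recommend()
    register_click(cat, clicked=random.random() < 0.5)

# Because exploration keeps surfacing broader content and clicks feed back
# into the weights, the niche share drifts down from its starting 0.8.
print(weights)
```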
 
Yeah, but Alphabet specifically designed the algorithm to work a specific way. If it led you to content that harmed you, it would be weird to say Alphabet took no direct action. That doesn't necessarily mean that it would end up liable in the end, but the argument doesn't seem outrageous.

To put it another way, if Alphabet designs an algorithm specifically to promote/teach/do illegal activity, it would be weird to totally insulate it from the underlying crimes.

Of course, my hypothetical is super specific, and not that easily applicable, but you get the idea.

I don't think this will survive on appeal though unless the evidence presented was really good.
I guess that makes some sense. The one outlier I find is 4chan. People (and feds) post shit. It then gets bumped up and down based on activity, and eventually fades away when it hits the bump limit or the mods will it. That seems more like random activity than an algorithm.
I don't see how it affects the farms at all. The only similar feature would be the feature system, but that can only be done by Mods, so if anyone were to sue for something written in a feature, 230 wouldn't apply.
Well that is good to know.
 
Except it wouldn't for YouTube. Retention is the name of the game, and there is only so much related MLP content on the platform, as well as only so much any one individual actually wants to consume. The algo will start spitting related trash towards you to try and keep you there. Occasionally it will spit seemingly random content that users like you clicked on, or other viral content to keep you there. If you interact with that content you get more tailored, but broader, shit. Potentially "harmful" content starts being fed to you as you engage. The algo is strong. What you're describing is how the algo worked in its infancy. As it's automated and the user chooses what to click on, I still don't really buy their argument; it seems like a stretch. KF is unaffected whichever way this goes.
I mean, somewhat. It depends, I'd say, on how old your account is, tbh. I've had mine for at least a decade. I get a few random recommendations still, but even then, it's in topics I'm interested in. In order to get YouTube to pump out, idk, ISIS beheading videos, I'd have to manually, deliberately punch that in. It knows I don't want that and have zero interest. My point is the guy chose to radicalize himself.
 
Honestly, I feel like being sick and only doing one stream this week is a decent enough way for Null to transition into the one-a-week streams he talked about before. Personally, I enjoy longer streams that go into more detail and encapsulate more information (which hopefully would be the compromise when dear feeder begins doing it once a week), and we've already gone a week or so without the grand media empire of Mad At The Internet burning down.
 
Yeah, but Alphabet specifically designed the algorithm to work a specific way. If it led you to content that harmed you, it would be weird to say Alphabet took no direct action. That doesn't necessarily mean that it would end up liable in the end, but the argument doesn't seem outrageous.

To put it another way, if Alphabet designs an algorithm specifically to promote/teach/do illegal activity, it would be weird to totally insulate it from the underlying crimes.

Of course, my hypothetical is super specific, and not that easily applicable, but you get the idea.

I don't think this will survive on appeal though unless the evidence presented was really good.


I don't see how it affects the farms at all. The only similar feature would be the feature system, but that can only be done by Mods, so if anyone were to sue for something written in a feature, 230 wouldn't apply.
Although, doesn't this miss the forest for the trees?

Say somebody does a heckin' murderino and says "I was told to by this YouTube channel." Don't you have to prove the speech provided by the channel goes beyond First Amendment protections and becomes incitement to violence?

So then, isn't it even further removed to try to classify what the algorithm does as "speech" and then prove that its vague suggestion to watch various other videos somehow also constitutes incitement?

Or is this not a speech issue?
 
Please stay alive Null so you can give me more of those special secret trophy stickers.
 