Undefined Symbol
kiwifarms.net
Joined: Oct 25, 2020
This brings up a good discussion. Going by Matthew's words, the reason for the block at the moment appears to be that Kiwifarms (while under investigation/close watch over the last 2 weeks) became an "immediate threat" due to some activity, so the action was taken as a preventative measure against harm to human life. In simpler words: "the response to this content or violation cannot be handled fast enough through traditional legal channels, so we are preventing potential further harm by acting on it now, before a legal judgement arrives". This tells me they declared that something was the "final straw" or inciting incident in their decision.
Here is a post from Hacker News on the matter by eastdakota (Matthew Prince? Someone correct me here if I'm wrong):
eastdakota (https://news.ycombinator.com/item?id=32706673)
Reading over the comments I see everyone thinking this is about “free speech.” It is not. It’s about what in the US you’d call “due process” and in all the rest of the world you’d call “rule of law.”
Our decision today was that the risk created by the content could not be dealt with in a timely enough manner by the traditional rule of law systems.
That’s a failure of the rule of law on two dimensions: we shouldn’t be the ones making that call, and no one else who should was stepping up in spite of being aware of the threat.
Encourage you when these issues arise to think of them in the rule of law context, rather than free speech, in order to have a more robust conversation with frameworks that have an appeal and applicability across nearly every nation and government.
This obviously raises some questions:
1) What was the "immediate threat"? Was it a singular post here? Was it the fake bomb post? Did they declare the increase in doxing and attention on Keffals over the last 2 weeks to be one general "singular act", and was that the escalation to an immediate threat?
Currently, we aren't able to determine any singular threat other than the ones Null has mentioned as the potential items that caused this decision.
2) What is considered a "timely enough manner" when it comes to moderating content that is both not allowed on the site and poses a hazard of direct harm to humans?
If we assume the bomb post was the final straw, the content was:
- Self-deleted by the poster promptly (<20 minutes)
- Disapproved of by the community with negative reactions (meaning the platform itself was not condoning the act and it was one rogue user)
- Only existed, i.e. was only "live", for a limited amount of time (<20 minutes)
- Was dealt with promptly by the moderation staff, who banned the user for the post
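Just to make the "timely enough" question concrete, here's a rough, purely hypothetical sketch of what such a standard could even look like if someone tried to write it down. The 20-minute window only echoes the timeline above, and names like RESPONSE_SLA and was_timely are made up for illustration; none of this reflects any actual Cloudflare or Kiwifarms policy.

```python
from datetime import datetime, timedelta

# Hypothetical "timely response" window; the 20-minute figure only echoes
# the timeline above and is not any real policy threshold.
RESPONSE_SLA = timedelta(minutes=20)

def was_timely(posted_at: datetime, removed_at: datetime,
               sla: timedelta = RESPONSE_SLA) -> bool:
    """Return True if the offending post was taken down within the SLA window."""
    return (removed_at - posted_at) <= sla

# Example using the rough numbers from the bullet list: the post went up
# and was self-deleted / moderated away in under 20 minutes.
posted = datetime(2022, 9, 1, 12, 0)
removed = datetime(2022, 9, 1, 12, 18)   # ~18 minutes later
print(was_timely(posted, removed))        # True under this made-up standard
```

The point being: even a trivially strict standard like this would have been met in the bomb-post case, which is exactly why question 2 matters.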
3) Is a website responsible for content posted by a single user or group when it signals a direct possible harm to humans (i.e. bomb posts, public shooting threats, etc.)?
This is just another delve into the Section 230 discussion, and we've been down this road thousands of times and back again on its protections.
4) Should a decision on blocking access to a site in cases where it poses an "immediate harm" be temporary or permanent? Is Cloudflare's decision temporary or permanent?
If blocking said content is done as a protection because the legal process of investigating it (or passing judgement in such a case) is too slow or absent, should the response be permanent removal, or temporary removal until the matter has been properly investigated through the proper means?
5) What determines whether content is a direct harm and risk to human life? Where do you draw the line?
Obviously bomb, shooting, and terroristic threats fit the bill, but where do you set the line on the spectrum? Can doxing count as a direct harm, since it can open the possibility of immediate harm to someone from an outside group? Can posting unwanted/personal details that push someone into a mental state where they may commit an illegal act, or attempt suicide, count?
Keffals' gang was mentioning how Kiwifarms apparently "clogged up" suicide hotlines and thus prevented real people in potentially critical mental health conditions from receiving help. If this is true and the liar Keffals is actually right, would this count as an "immediate risk"? Should it be dealt with?
6) How believable should content that poses a possible risk to human life be before action is taken in the manner that Cloudflare claims?
Imagine I said I'm going to detonate Minecraft TNT in Minecraft New York. Is that a valid threat and should action be taken on it?