Lightweight Anonymous Distributed Shitposting - Distributed forum possibilities sperg zone

Yes. This is my #1 consideration, which is why I wrote:

The question is how to build and distribute that base blacklist. I'd happily see that sort of thing developed out in the open with the assistance of law enforcement, and anyone else who has content-id on the evil shit out there. Bake it as a client default, a bit like the SSL certificates that come with your browser, and add heavy warnings that you're on your own if you disable it.
The #2 consideration is actually having a community though. And this does require curation beyond merely filtering out kiddie porn. I don't think the concept of a distributed hash table of kiddie porn hashes is enough to deal with this; strong processes for maintaining identities and the permissions tied to them are necessary.
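On the "bake it as a client default" point, here's a minimal sketch of what that could look like, assuming the blacklist ships as a file of hex digests alongside the client. All names here are illustrative, not any real client's API:

```python
# Sketch: a default hash blacklist shipped with the client, assuming it is
# distributed as a text file of hex digests. Illustrative names only.
import hashlib
import sys

DEFAULT_BLACKLIST_PATH = "blacklist.sha256"  # shipped next to the binary

def load_blacklist(path=DEFAULT_BLACKLIST_PATH, disabled=False):
    if disabled:
        # The "you're on your own" warning described above.
        print("WARNING: base blacklist disabled. You accept all liability "
              "for content your node stores and relays.", file=sys.stderr)
        return frozenset()
    with open(path) as f:
        return frozenset(line.strip().lower() for line in f if line.strip())

def is_blocked(payload: bytes, blacklist: frozenset) -> bool:
    return hashlib.sha256(payload).hexdigest() in blacklist
```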
 
I'd happily see that sort of thing developed out in the open with the assistance of law enforcement, and anyone else who has content-id on the evil shit out there.
Problem is that content ID only works once you've already got the content so you can hash it. I'm not sure how much this would help unless you could build the content ID system into the client for outgoing posts, and have the network only accept new posts that have passed the client checks. I have no idea how feasible it is to require verified standalone client software.

We have to come up with some sort of way for a P2P network of user-submitted content to be trustable, at least to the extent that it won't get you vanned.
 
Assuming we're only sending text data: ensure incoming data is valid ASCII, and check the declared message size against the actual content size; if they aren't within an approximate tolerance of each other, the data gets dropped.
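A minimal sketch of that check; the size limit and the 5% tolerance are made-up values for illustration:

```python
# Drop anything that isn't printable ASCII or whose declared size doesn't
# roughly match the actual payload. MAX_BYTES and the tolerance are guesses.
MAX_BYTES = 4096

def accept_message(payload: bytes, declared_size: int) -> bool:
    if len(payload) > MAX_BYTES:
        return False
    # Reject if the declared size is off by more than ~5%
    if abs(len(payload) - declared_size) > declared_size * 0.05:
        return False
    try:
        text = payload.decode("ascii")
    except UnicodeDecodeError:
        return False
    # Printable ASCII plus ordinary whitespace only
    return all(c.isprintable() or c in "\n\r\t" for c in text)
```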
 
My biggest worry is that without removing the filesharing component it's just too much of a liability. Also the "ownership" aspect is a little retarded. Not really so distributed is it then?
Not really much worse than Fediverse with mediaproxy.
 
Problem is that content ID only works once you've already got the content so you can hash it. I'm not sure how much this would help unless you could build the content ID system into the client for outgoing posts, and have the network only accept new posts that have passed the client checks. I have no idea how feasible it is to require verified standalone client software.

We have to come up with some sort of way for a P2P network of user-submitted content to be trustable, at least to the extent that it won't get you vanned.
Yeah, and the reality is that even if you somehow get your hands on whatever table of MD5 hashes the FBI uses when scanning laptops for cheese pizza, nothing stops a user from tagging a byte or two onto the end of babyfuck.jpg to screw that up.
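To make that concrete, appending even a single byte produces a completely unrelated digest, which is why naive hash blacklists are trivially evaded:

```python
# Appending one byte to a file gives a totally different hash.
import hashlib

original = b"...image bytes..."
tampered = original + b"\x00"   # "tagging a byte or two onto the end"

print(hashlib.md5(original).hexdigest())   # one digest
print(hashlib.md5(tampered).hexdigest())   # an entirely unrelated digest
```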

I gathered from the 9chan saga that CloudFlare may have some sort of 'better' means of doing this, where they hold some more abstract representation of the objectionable images that can still pick them up whether the compression level has been changed on a JPEG, the color depth on a GIF or PNG, or any of the numerous other ways you can alter the hash of an image without substantially altering its content. But a hypothetical distributed system will absolutely not have access to that, and if you were actually promoting free speech (as opposed to network centralization, the purpose of CloudFlare), you'd probably get vanned for having 'derivatives of child porn'.

I don't see how you prevent this getting out of hand without transplanting the forum ownership model, with a benevolent dictator having the final say and jannies carrying out his dictates, into a distributed format. Honestly, anything less, you might as well just follow the Freenet model of "oh, there's 20 gigs of CP on my PC? Well, I don't know anything about that, I only downloaded plans for making Sten guns, the network did the rest".
 
I don't see how you prevent this getting out of hand without transplanting the forum ownership model, with a benevolent dictator having the final say and jannies carrying out his dictates, into a distributed format.
That part I think isn't prohibitively difficult, it's just a matter of programming. It's just the intake of data where you have potentially dangerous material coming in that's the problem.

The best thing I can think of is still a sort of non-P2P staging area. Unvalidated incoming posts can be on someone else's server (IPFS, Pastebin, a .txt file on Geocities, whatever) until they get "promoted" up to the P2P network. But it'd be difficult to validate incoming posts at scale - you couldn't have mods manually approve each one as safe.
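A rough sketch of that staging idea; the store and network objects below are stand-ins for whatever third-party host and P2P layer actually get used, not any real API:

```python
# Unvalidated posts sit in an off-network staging store and only enter the
# P2P swarm once promoted. Illustrative stand-ins only.
class StagingArea:
    def __init__(self, network):
        self.pending = {}      # post_id -> text, held off-network
        self.network = network

    def submit(self, post_id: str, text: str):
        self.pending[post_id] = text   # e.g. push to Pastebin/IPFS instead

    def promote(self, post_id: str):
        # Called once a post passes whatever validation gate exists;
        # only promoted posts are ever replicated peer-to-peer.
        text = self.pending.pop(post_id)
        self.network.publish(post_id, text)
```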
 
IRC seems like a good idea for staging until we can find another place.

Element is probably best atm since @CrunkLord420 has an instance already set up. But IRC should be a backup just in case he decides to ghost.
 
IRC seems like a good idea for staging until we can find another place.

Element is probably best atm since @CrunkLord420 has an instance already set up. But IRC should be a backup just in case he decides to ghost.
Reminder that Matrix rooms are a shared state and are not owned by any specific server. Servers only own users. If you have accounts on multiple servers, and give them both admin in a room, you will be able to continue that room without disruption if one of the servers goes offline. I already went ahead and did this for myself when Null announced his 230 intentions. For example, I'd change the primary address of #TempleOS:matrix.kiwifarms.net to #TempleOS:glowers.club; the room is already accessible at #TempleOS:glowers.club, it's just not the primary address.
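For anyone wanting to replicate that, a hedged sketch using the standard Matrix client-server API. The two PUT endpoints are from the spec; the room ID, token, and `requests` usage are placeholders, and your homeserver may expect different API versions:

```python
# Add an alias on the surviving server, then point the room's primary
# (canonical) address at it. Placeholders throughout.
import requests

HS = "https://glowers.club"
TOKEN = "<access token for an admin account on that server>"
ROOM_ID = "!exampleRoomId:matrix.kiwifarms.net"   # placeholder room ID
AUTH = {"Authorization": f"Bearer {TOKEN}"}

# 1. Make sure the room has an alias on the surviving server.
requests.put(
    f"{HS}/_matrix/client/v3/directory/room/%23TempleOS:glowers.club",
    headers=AUTH, json={"room_id": ROOM_ID})

# 2. Set that alias as the room's primary address.
requests.put(
    f"{HS}/_matrix/client/v3/rooms/{ROOM_ID}/state/m.room.canonical_alias",
    headers=AUTH, json={"alias": "#TempleOS:glowers.club"})
```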
 
Been reading this thread and would like to participate in whatever goes forward. Anyway, thoughts...


Regarding permissions and banning, you could leave it up to individual thread owners (if you make it, you obviously own it)

Regarding storage, even my phone has 128GB (and I'm iShilling); my burner droid has a 512GB SD card sitting around, I'm pretty sure

I think the thread-maker being the auto-mod for that thread is a very bad idea, especially as I'd personally envisage a lot of political discussion taking place on such a forum. Most threads would be either someone's personal tyranny of viewpoint, or else forgotten about and unmoderated.

Whatever is created, it is going to be living in a hostile environment. Which means gatekeeping to survive. It's sad, it's a pain, and it hinders good/fun/interesting people from just casually joining. BUT if the collapse of hobby, franchise, and community under progressive entryists has taught me one thing in the West, it's that gatekeeping is a necessary evil. To that end, there has to be moderation above the level of the thread. Whether that be a benevolent dictator, or some sort of default mod list that individuals can accept and maybe edit to suit their personal acceptance of modding by a given user, I don't know. But I strongly believe it's necessary as a layer above the basic user.


Yes. This is my #1 consideration, which is why I wrote:

The question is how to build and distribute that base blacklist. I'd happily see that sort of thing developed out in the open with the assistance of law enforcement, and anyone else who has content-id on the evil shit out there. Bake it as a client default, a bit like the SSL certificates that come with your browser, and add heavy warnings that you're on your own if you disable it.

This is exactly the sort of thing I was going to post, and whilst 311's reply raises valid points about how this might be difficult, it's too important to give up on without meaningful exploration. Are there any services where you can run a hash of an image or video against their API and get back a response saying whether it's on their list? I can see how it might be difficult to offer such a service without it being exploitable by actual pedos, but maybe it exists - it would certainly be useful. If it's incompatible with anonymity, e.g. someone has to register with them formally, then maybe someone could "take one for the team" and be a non-anonymous reader of the forum only, submitting any uploaded media to their service for verification. Something like that. Don't let the perfect be the enemy of the good and all that. I wouldn't want to be part of a forum where child porn was floating around, and that's just on a matter of personal loathing before you even get to the legal risk.

We shouldn't let the lack of a robust way of stopping this prevent a project getting off the ground, but we should definitely be working on a robust system. We have to face the reality that there are both actual pedos who will try to piggyback on any anonymous service to distribute it, and malicious people who will upload it as a means of poisoning the service and getting it shut down. Also, I'm quite frankly open to passing on the details of anyone who uploaded that shit to law enforcement. I don't see anything wrong with that.

Yeah, and the reality is that even if you somehow get your hands on whatever table of MD5 hashes the FBI uses when scanning laptops for cheese pizza, nothing stops a user from tagging a byte or two onto the end of babyfuck.jpg to screw that up.

I gathered from the 9chan saga that CloudFlare may have some sort of 'better' means of doing this, where they hold some more abstract representation of the objectionable images that can still pick them up whether the compression level has been changed on a JPEG, the color depth on a GIF or PNG, or any of the numerous other ways you can alter the hash of an image without substantially altering its content. But a hypothetical distributed system will absolutely not have access to that, and if you were actually promoting free speech (as opposed to network centralization, the purpose of CloudFlare), you'd probably get vanned for having 'derivatives of child porn'.

I don't see how you prevent this getting out of hand without transplanting the forum ownership model, with a benevolent dictator having the final say and jannies carrying out his dictates, into a distributed format. Honestly, anything less, you might as well just follow the Freenet model of "oh, there's 20 gigs of CP on my PC? Well, I don't know anything about that, I only downloaded plans for making Sten guns, the network did the rest".

All this makes sense, but maybe we should do some practical looking into this. Reach out to services that might know more and that have resources, like the Internet Watch Foundation. I can hear some recoiling in horror at the idea, but what I want from a community is just the ability to talk and share stuff with others, without entryists joining and turning it into Reddit or anon hordes making a new 4chan.

Which brings me onto another thought - someone above said they want it to have no user-persistence, just an anonymous place like 4chan. I really dislike that. It makes it into a notice board for practical purposes. Without persistent user identity you give up the ability to find others you get along with, or to build up a feeling for a particular poster's leanings. Imagine a long discussion thread like the Spedisential Election without knowing "oh, this post is by 311... and this one is by HK" and processing accordingly. You cannot build up trust without user-persistence. You cannot establish a reputation without user-persistence. And you cannot gatekeep without user-persistence. I think in a hostile environment, gatekeeping is essential to success.

Frankly, if there's a way of doing it without compromising anonymity too much, I'd be strongly in favor of creating some sort of cost to signing up as a new member. The cost could be money or time, but anything that prevents casually making a new account on a whim. Sites like Discord use SMS verification. That's probably out for us if we want anonymity, though simply requiring email registration with an unused email address is a start. I personally like the idea of a very small financial cost paid in a cryptocurrency. But I don't know how anonymous they really are - someone more knowledgeable tell me? This small amount could either go to running costs for whoever is helping in a material way, or for all I care be given to one randomly selected other user. Don't care - the goal is simply to force the community to be more selective and prevent anyone just wandering in and posting. It could even go some way to solving the child porn / terrorist threat problem.

I know these last comments will fly directly in the face of those who want some totally anonymous, totally free, unstoppable decentralised system. But that isn't possible, and to me isn't desirable. I want a gatekept community with persistent identity where I can have conversations with like-minded people, or at least those I can disagree with civilly. I just want it to be a community that can survive in a hostile environment, which anything that isn't decentralised can no longer do.

Anyway, my thoughts.
 
I personally like the idea of a very small financial cost paid in a cryptocurrency. But I don't know how anonymous they really are - someone more knowledgeable tell me?
Short answer: it varies for each coin, as well as where you get it. If you're using Bitcoin by doxing yourself to an exchange and trading USD for BTC, for example, you're pretty low on the anonymity scale.

How to validate new users is another interesting question though - this is another opportunity for denial of service because someone could try to register a trillion new users and crash the system or bog it down.
Can CAPTCHAs work in a decentralized system? How do you stop someone from denying service on your CAPTCHA provider? (I imagine they don't take kindly to a trillion bogus requests with your site key either)
How is this issue handled today on ordinary web servers, actually? There must be some limit on the number of users pending email verification, captchas waiting to be processed, etc.
 
My biggest worry is that without removing the filesharing component it's just too much of a liability. Also the "ownership" aspect is a little retarded. Not really so distributed is it then?
Distributed is not decentralized. A Distributed Denial of Service attack is still done by one actor.
 
If you want to be fully anon (like the chans), that's almost impossible (short of some crazy research projects, which really deserve another post). If you're fine being pseudonymous (like this forum), however, the technology has existed for more than a decade.

Look up FMS (Freenet Messaging System). It's a completely decentralized forum, with no spam or egregious CP problems. People decide for themselves what jannies they want to trust and how intense the filtering should be. Filesharing is handled by Freenet.

The moderation works basically like the rating system on Kiwi Farms. If I've upvoted a lot of posts by someone, and they publish a list of what posts they've upvoted, I'll ignore posts from identities that they have heavily downvoted, unless I've upvoted them or someone else I trust has. This continues recursively.
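A toy version of that recursive trust propagation; the decay-by-distance and weighting details below are my own guess at how such a scheme could work, not FMS's actual algorithm:

```python
# trust_graph maps identity -> {identity: score}, where positive means
# "I upvote their stuff" and negative "I downvote it".
def effective_trust(me, target, trust_graph, depth=3, seen=None):
    seen = seen or {me}
    direct = trust_graph.get(me, {}).get(target)
    if direct is not None:
        return direct                      # my own rating always wins
    if depth == 0:
        return 0.0
    total, weight = 0.0, 0.0
    for friend, score in trust_graph.get(me, {}).items():
        if score <= 0 or friend in seen:   # only consult people I trust
            continue
        t = effective_trust(friend, target, trust_graph,
                            depth - 1, seen | {friend})
        total += score * t
        weight += score
    return total / weight if weight else 0.0

# A post from `target` might be shown iff effective_trust(...) >= 0.
```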

The downside is you need to fill in like ten captchas to register, it takes a few days, and it's generally shitty and slow, but it works. All it needs is some polish, and you would have a shitposting center of last resort that can withstand literally anything.

How FMS works
If you want to try FMS, I recommend using jFMS and not the official frontend.
 
I've been doing some thinking about this and some of the problems people presented, and I've thought of a few things. These are predicated on it being text only, so file sizes are small. I've seen some people argue for centralised moderation and some kind of account system; I have to say I'd strongly disagree with those ideas. IMO if there is no way to get it to work without them, it's not worth doing. I also think there's a bit of variance in what's envisioned in terms of scope here. I pictured a single instance of such a program as analogous to an individual, linear thread like the one we are posting in, as opposed to a full-fledged forum in and of itself.

-There would have to be hard character limits for messages. It's the only way it can function. Maybe a few thousand characters.

-Each message has an ID based on a mathematically generated number derived from the text of the post, plus the date and UTC time (to the second) it was posted. I would imagine each character would be converted to its ASCII value and plugged into a formula resulting in a number (X). The message ID would be X followed by the date and time, so a message posted on December 7th 2020 at 10:34:33 UTC would be X12072020103433. This post I am making, for example, would generate a number based on the characters within it, with the date appended to the end, resulting in a unique message ID. (A rough sketch of the whole scheme is at the end of this post.)

-The date appended to the end of message IDs determines their order.

-Newly received messages are held in memory until you approve them, and then they are written to a text file and redistributed. This way no one person on the chain could censor a message, but if everyone on the chain opts not to share a message because it's junk or illegal, propagation can be controlled to some degree. You don't have to download to memory or distribute messages that are threats against judges if you don't want to.

-You have a text file that keeps track of message IDs already received; this is written to regardless of whether they are rejected or accepted, and you do not receive any message on the list again. This ensures that you don't have to re-download, say, a message saying someone is going to shoot a judge in order to identify the offending message again; it gets automatically rejected next time.

-A hard cap on the number of messages you receive from each peer each day. This is to cut down on spam. So say that limit is 10 messages. Peer 1 posts seven messages, shares them with peers 2 and 3, and logs off; peer 2 posts four messages and peer 3 posts one; peers 2 and 3 exchange messages, and each now has a set of 12 new messages. Peer 4 logs in and starts syncing with peers 2 and 3. Neither 2 nor 3 can send 4 the complete set of 12 messages on its own, but between them 4 can get all of them.

-You can blackball spammy IPs, which in theory would result in spammy IPs being blocked out of the network when everyone blocks them. If a peer does not block a spammy IP, the hard cap on the number of messages that peer can send would help keep him from flooding the network with messages from IPs others have blocked. You can in turn block that guy for being a retard.

-You can purge your files to keep your database lean, and the current date and UTC time at the moment you do so is stored in a file, so that you do not accept any message whose ID carries a date and time earlier than the time you purged. The messages are stored as plain text, so you can always archive them before purging if you like.

I dunno, maybe it still wouldn't be workable. Just spitballing.
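A rough sketch of those points taken together, assuming a SHA-256 digest stands in for the "formula", and using a year-first timestamp so the IDs sort chronologically as plain strings (the MMDDYYYY ordering in the example above wouldn't):

```python
# Content-derived number + UTC timestamp as the ID, a seen-ID set, a
# per-peer daily cap, and a purge cutoff. Storage is simplified; the real
# thing would persist these to the text files described above.
import hashlib
from datetime import datetime

DAILY_CAP = 10            # the hard per-peer limit discussed above
seen_ids = set()          # IDs already received, accepted or rejected
peer_counts = {}          # peer -> messages accepted from them today
purge_cutoff = None       # set to the purge time when you purge your files

def message_id(text: str, sent_at: datetime) -> str:
    x = hashlib.sha256(text.encode("ascii")).hexdigest()[:16]  # the "X" number
    return x + sent_at.strftime("%Y%m%d%H%M%S")

def accept(peer: str, text: str, sent_at: datetime) -> bool:
    mid = message_id(text, sent_at)
    if mid in seen_ids:                          # seen before, even if rejected
        return False
    if purge_cutoff is not None and sent_at < purge_cutoff:
        return False                             # older than the last purge
    if peer_counts.get(peer, 0) >= DAILY_CAP:    # per-peer daily spam cap
        return False
    seen_ids.add(mid)
    peer_counts[peer] = peer_counts.get(peer, 0) + 1
    return True    # now held in memory until the user approves it
```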
 
-A hard cap on the number of messages you receive from each peer each day. This is to cut down on spam.
This won't help in the absence of an account system. Attackers will just hire a botnet to create a multitude of new peers (each with its own unique IP) that all spam up to the message limit. Operators of legitimate peers will have to spend all their time blocking them all, or risk getting the legitimate traffic drowned out by spam. Remember, you have to decide which posts not to recirculate under this model, since there's no centralized moderation. Everyone would have to be pruning out spam 24/7 or else the network would break down under attack.

Also, you raise another interesting point:
You can blackball spammy IPs, which in theory would result in spammy IPs being blocked out of the network
Do we even want to be able to know people's IPs? I would imagine you'd want to access a heavily-attacked system like this via at least a VPN, if not Tor and the proverbial seven proxies.
 
This won't help in the absence of an account system. Attackers will just hire a botnet to create a multitude of new peers (each with its own unique IP) that all spam up to the message limit. Operators of legitimate peers will have to spend all their time blocking them all, or risk getting the legitimate traffic drowned out by spam. Remember, you have to decide which posts not to recirculate under this model, since there's no centralized moderation. Everyone would have to be pruning out spam 24/7 or else the network would break down under attack.

Also, you raise another interesting point:

Do we even want to be able to know people's IPs? I would imagine you'd want to access a heavily-attacked system like this via at least a VPN, if not Tor and the proverbial seven proxies.
I feel like you're imagining a large singular network. I'm imagining thousands of small independent ones. Think of the pre-internet BBS era, where there were thousands of BBS servers around the country. If there were 1,000 BBSes and you took down 1, there would still be 999 BBSes people could dial into. Now imagine if instead of dialing into someone's Commodore 64, you joined a small network. You create the open-source software, create some tutorials, put it out in the wild, and let people create their own little decentralized networks to discuss a myriad of things, from video games to lolcows to TV shows. I think that is far more feasible than one large decentralized global network everyone's on, as the world wide web was originally intended to be. I'd actually argue that fact is what's causing the problems we're having with the web now in the first place.

When you talk about a botnet attack, is each node in the botnet creating a unique message every time? You have a system where the characters of each post are converted to their ASCII values and plugged into a formula to come up with a unique number, if done properly. If all the bots are just spamming the letter S, adding the first message that's just the letter S to your block list would block out the others. It seems like setting up a botnet to consistently change the message it's spamming people with, for the sake of disrupting 50 people talking about Star Trek or some lolcow or some shit, would be a lot of technical overhead for very little reward, would it not?
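One subtlety, made concrete below: since the full ID embeds the timestamp, the blocklist has to match on the hash of the text alone, or re-sends of the same junk at a new time slip through:

```python
# Block on the content hash, not the full (timestamped) message ID.
import hashlib

blocklist = set()

def block(text: str):
    blocklist.add(hashlib.sha256(text.encode("ascii")).hexdigest())

def is_blocked(text: str) -> bool:
    return hashlib.sha256(text.encode("ascii")).hexdigest() in blocklist

block("S")                 # one blocklist entry...
assert is_blocked("S")     # ...kills every later copy, whatever its ID says
```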
 
When you talk about a botnet attack, is each node in the botnet creating a unique message every time?
Why wouldn't they be? It's incredibly easy. It could even be just 5000 randomly picked characters every time.
You can say "Then we'll require the message to be in something like English and throw it out if not", or whatever. But this arms race already played out decades ago in email. Spoiler alert: the spammers won.

Besides which, having a hash blacklist is another point for denial of service if you run it on unfiltered input. What if I intentionally try to make your blacklist 100GB long by submitting endless garbage? What if everyone's trying to share gigantic blacklists? (Or, for that matter, what if I create a malicious blacklist that blocks all the good content?)

Also:
You have a system where the characters of each post are converted to their ASCII values and plugged into a formula to come up with a unique number, if done properly.
You can just say a "hash", everyone will know what you're talking about.
 
I've seen some people argue for centralised moderation and some kind of account system; I have to say I'd strongly disagree with those ideas.
Would a decentralized moderation and some kind of account system work?
If not, I hate to break it to you, it's impossible. The current state-of-the-art is that you get a temporary account, use it for a few days, and then exchange it for a new account as long as it's in good standing.
Newly received messages are held in memory until you approve them, and then they are written to a text file and redistributed.
What if you get a trillion messages?
-A hard cap on the number of messages you receive from each peer each day. This is to cut down on spam. So say that limit is 10 messages. Peer 1 posts seven messages, shares them with peers 2 and 3, and logs off; peer 2 posts four messages and peer 3 posts one; peers 2 and 3 exchange messages, and each now has a set of 12 new messages. Peer 4 logs in and starts syncing with peers 2 and 3. Neither 2 nor 3 can send 4 the complete set of 12 messages on its own, but between them 4 can get all of them.
This just makes spam worse. If I can post enough junk messages that all ten of those messages are junk (with high probability), the forum is totally clogged.
I feel like you're imagining a large singular network. I'm imagining thousands of small independent ones. Think of the pre-internet BBS era, where there were thousands of BBS servers around the country. If there were 1,000 BBSes and you took down 1, there would still be 999 BBSes people could dial into. Now imagine if instead of dialing into someone's Commodore 64, you joined a small network. You create the open-source software, create some tutorials, put it out in the wild, and let people create their own little decentralized networks to discuss a myriad of things, from video games to lolcows to TV shows. I think that is far more feasible than one large decentralized global network everyone's on, as the world wide web was originally intended to be. I'd actually argue that fact is what's causing the problems we're having with the web now in the first place.
This works, but you still need to fix the spam problem. If you want an old-school system that's proven to scale, why not copy Usenet instead?

You have a bunch of "servers". A client connects to a server to post or read. The server looks at the IP, serves a CAPTCHA, runs the post through a Bayesian spam filter, etc., and if it all looks fine, accepts the post. The other servers then share the messages with each other, applying rudimentary spam filtering.
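A sketch of that accept-then-gossip path; the CAPTCHA check and spam filter are stubbed out, and none of this is NNTPchan's actual code:

```python
# Illustrative server accept path: filter at intake, then gossip to peers,
# which run their own filters. Stubs where real checks would go.
class Server:
    def __init__(self, peers=None):
        self.posts = []
        self.ip_blacklist = set()
        self.peers = peers or []

    def captcha_ok(self, answer: str) -> bool:
        return bool(answer)              # stub: plug in a real CAPTCHA

    def looks_like_spam(self, body: str) -> bool:
        return False                     # stub: plug in a Bayesian filter

    def accept_post(self, client_ip: str, captcha: str, body: str) -> bool:
        if client_ip in self.ip_blacklist:
            return False
        if not self.captcha_ok(captcha) or self.looks_like_spam(body):
            return False
        self.posts.append(body)
        for peer in self.peers:          # share with the other servers,
            peer.receive_from_peer(body) # which apply their own filtering
        return True

    def receive_from_peer(self, body: str):
        if not self.looks_like_spam(body):
            self.posts.append(body)
```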

If you post thousands of spam messages to one server, it will (hopefully) block you. If it doesn't, other servers will start to blacklist it.

If you run your own server to spam, other servers will start to blacklist it.

This system exists, and it's called NNTPchan, but because anonymous forums are so hard, nobody uses it AFAIK.
Why wouldn't they be? It's incredibly easy. It could even be just 5000 randomly picked characters every time.
You can say "Then we'll require the message to be in something like English and throw it out if not", or whatever. But this arms race already played out decades ago in email. Spoiler alert: the spammers won.

Besides which, having a hash blacklist is another point for denial of service if you run it on unfiltered input. What if I intentionally try to make your blacklist 100GB long by submitting endless garbage? What if everyone's trying to share gigantic blacklists? (Or, for that matter, what if I create a malicious blacklist that blocks all the good content?)

Also:

You can just say a "hash", everyone will know what you're talking about.
Keep in mind that email spam has to sell you something, whereas this is just to disrupt. Email spam has to get across the point "go here to collect a million dollars of Nigerian money" somehow, even to the stupidest possible reader. This can just be total garbage.

How would your spam filter deal with someone just posting randomly selected posts from the archive of another forum? Those would be in perfect English, they just wouldn't have any cohesion between them.

EDIT: If you want even more old-school, a basic mailing list would work fine. You sign up with your email, you send an email to the list, everyone else on the list gets a copy. If your mail server or domain gets taken down, you still have the email list - just tear it down and start a new one, and nobody will notice.
 
Short answer: it varies for each coin, as well as where you get it. If you're using Bitcoin by doxing yourself to an exchange and trading USD for BTC, for example, you're pretty low on the anonymity scale.

How to validate new users is another interesting question though - this is another opportunity for denial of service because someone could try to register a trillion new users and crash the system or bog it down.
Can CAPTCHAs work in a decentralized system? How do you stop someone from denying service on your CAPTCHA provider? (I imagine they don't take kindly to a trillion bogus requests with your site key either)
How is this issue handled today on ordinary web servers, actually? There must be some limit on the number of users pending email verification, captchas waiting to be processed, etc.

Thanks for that. I appreciate that the anonymity of cryptocurrency is both a very complex field and one in which most answers are variants on "what do you call good enough?" rather than absolutes. So let me ask a more specific question: is it viable that you could have reasonable anonymity in a model where a small crypto payment is made to join? If so, that solves our CAPTCHA issue entirely and also goes a long way to retard frivolous sign-ups. And I know the concept flies in the face of what some people envisage as acceptable, but for all practical purposes it would work well, assuming the technical side was viable. I'm going to reiterate - I believe any such platform must have gatekeeping in order to be viable as a community, rather than just some random link site that can be taken off the board by any motivated person with a modicum of technical knowledge the moment posts start appearing there that somebody doesn't like.

Gatekeeping, persistent identity. They go together. If we're talking about creating a fallback for KiwiFarms then we need them. If we're talking about creating a fallback for 4chan, then we don't. But I'm not interested in a fallback for 4chan.


Separately, I put my words into action and did some preliminary research into how we could handle the child porn et al. issues. There are services that let you submit media / hashes of media and get a response back saying whether it's flagged as child porn. One such service is Microsoft's PhotoDNA, which is free for qualified users. I need to look into exactly what you need to do to qualify, and I think there's a strong possibility someone would have to take one for the team and be non-anonymous to get the details. But that person could likely act as a bridge to the service, proxying the media hashes back and forth to check they're clean. Will know more when I've done more research. But we need to keep this stuff out, both to protect the site from being shut down and simply because I do not want to inadvertently create a platform for pedos.
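An entirely hypothetical sketch of that bridge idea: one registered, non-anonymous node proxies hash checks to a scanning service. The URL, parameters, and response shape below are invented placeholders; PhotoDNA's real API will differ and requires qualification to access at all:

```python
# Hypothetical hash-check bridge. CHECK_URL and the JSON shape are made up;
# substitute whatever the real service actually exposes.
import hashlib
import requests

CHECK_URL = "https://example.invalid/check"   # placeholder, not a real endpoint
API_KEY = "<issued to the non-anonymous volunteer>"

def media_is_flagged(blob: bytes) -> bool:
    digest = hashlib.sha256(blob).hexdigest()
    resp = requests.post(CHECK_URL, json={"hash": digest},
                         headers={"X-Api-Key": API_KEY}, timeout=10)
    resp.raise_for_status()
    return resp.json().get("flagged", False)
```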

No word on their site whether it can also block Furry content, but we can hope.
 