Lightweight Anonymous Distributed Shitposting - Distributed forum possibilities sperg zone

I got onto usenet finally, ended up using Firebird. Found a free NNTP server fairly easily and kiwis were on it.

How does one rehost f.h.kiwis?
 
For the love of God, they won't even let you see the source code from the supposed clearnet mirror without getting on Freenet. I'm not feeling much trust in a system that says "Just trust us and run our code if you want to read our code".
Freenet is free and open source software available under GPLv2+. The source code is on GitHub.

FMS is a separate piece of software from Freenet. You do not need FMS to use Freenet, but you need Freenet to use FMS.

The source code for FMS is officially published at its Freenet site, but here's another clearnet mirror: https://github.com/SeekingFor/FMS

Attached is the latest FMS source tree I just downloaded off Freenet. This is not very useful without a working Freenet install though. You can get the Freenet source code without installing Freenet.
 


That's true, but neither do you have any knowledge of it. Nobody, not even known pedophiles, has been sentenced over Freenet "storing" illegal materials.

The datastore is always encrypted. There is no way to tell what material you're storing, unless you already have its key. Nobody has been sentenced or charged with any crimes for running a Freenet node. It's like Tor in that regard, where it's completely legal to run an exit node, but illegal to deal in child pornography.
I know that's the Freenet line, but it's a definite point of vulnerability if Freenet was to become a common platform for discussions of a type that governments want to suppress. You might not 'know' exactly what CP you're hosting... but you know that you probably are hosting some.

It would be interesting to see how the Freenet project would respond if someone made a version that blacklists hashes for known CP. I suspect some substantial fraction of the community would use it as a resource to decide what to download next.
 
As I write this, the current total posts on KF is 7,583,872. So just the text wouldn't be an outrageous amount in this internet age, even if we assume they're a few KB apiece (at, say, 3 KB each that's only about 23 GB of raw text). Images and attachments, I have no clue, but I think that's going to be an issue with any decentralized system.
I was guessing that the storage needs would be the bottleneck too. Using a blockchain-type thing makes the most sense; it solves most of the trust problems in decentralization. You could use key signing on top of the public-private keys to do permissions. For example, have a key in the chain such that any user whose key is signed by it is an admin, so it really just turns into distributed key trust, like SSL certificates.
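To make the key-signing idea concrete, here's a minimal sketch using libsodium's Ed25519 detached signatures. The notion of a dedicated "admin" signing key, and all the names here, are just my illustration of the idea, not how any existing forum software does it.
Code:
// Sketch: an "admin" keypair endorses a user's public key with a detached
// Ed25519 signature; anyone holding the admin public key can verify it.
#include <sodium.h>
#include <cstdio>

int main() {
    if (sodium_init() < 0) return 1;

    // Hypothetical admin signing keypair; in practice only its public half
    // would be published (e.g. embedded in the chain's genesis data).
    unsigned char admin_pk[crypto_sign_PUBLICKEYBYTES], admin_sk[crypto_sign_SECRETKEYBYTES];
    crypto_sign_keypair(admin_pk, admin_sk);

    // A user's public key the admin wants to endorse.
    unsigned char user_pk[crypto_sign_PUBLICKEYBYTES], user_sk[crypto_sign_SECRETKEYBYTES];
    crypto_sign_keypair(user_pk, user_sk);

    // Admin signs the user's public key.
    unsigned char sig[crypto_sign_BYTES];
    crypto_sign_detached(sig, nullptr, user_pk, sizeof user_pk, admin_sk);

    // Any peer can now answer "is this user admin-endorsed?" using admin_pk alone.
    bool endorsed = crypto_sign_verify_detached(sig, user_pk, sizeof user_pk, admin_pk) == 0;
    std::printf("user endorsed by admin key: %s\n", endorsed ? "yes" : "no");
    return 0;
}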

Hypothetically, let's say you have a number of peers logging in and out of the blockchain throughout the day. Every time a peer needs to reconnect after being out of sync, it's a very expensive process: the peer has to download the new (updated) chain and can only start mining (validating) new posts after the sync concludes. Now imagine these peers are all laptop users who shut down when not online to save battery. You're going to have a peer connect for about an hour, make some posts, and then leave again. Ideally the chain would be downloaded in full before the peer makes a post, so the peer can mine its own posts and others' while it's online. You end up with every peer copying the whole forum several times a day, every time they connect. Miners work best when you treat them like servers: they stay connected to the network and powered on more or less all the time. When you do take one offline and later put it back on the network, you can wait for it to get caught up with the chain again without interruption.

Switching to this new kind of system would be much more bandwidth intensive; any peer would have to fetch new posts in order to mine them, keep in sync with the rest of the forum chain, and have a copy of the chain ready to send to fellow peers on request. If you had a very unpopular site with a few weirdos who never turn off their computers, such a system would probably be very workable, because there would be little to mine and all the peers would always be available on the network. With little to mine once all the peers reach consensus, that state would last a long time, giving the peers time to idle or power off, come back online, and re-download an unchanged blockchain.

I'm really not impressed with text compression. Google's brotli is not as good as bzip2 in some instances, and gzip or deflate wouldn't get things small enough. Bzip2 is also slow. It's also not a great selling point if you can only have text, so it would need to be generalized compression; otherwise I don't think the project would get enough adoption to make maintaining it worthwhile.

I'd like to get some statistics from different forums about what data storage they need. This would include static content, PHP scripts, and the database. It might be overkill not to just take the database size, but I think it's best to get an accurate picture of the current storage demands of the average forum. Getting a few different forum admins to run du on the right directories would be a good jumping-off point before making anything concrete. I'd also be interested in data usage as a function of popularity (posts being made); F(posts) -> size would give a good idea of what bandwidth miners will need to grow the blockchain.

I have some reservations about how this would work, because ISPs limit upload speeds for home users, which sort of kills any chance of a true P2P network. I also don't think a blockchain forum would change a lot of the political problems with running a forum. The best-case scenario I see is that it makes running one forum across multiple boxes much easier to set up.
 
So what it looks like is that in FMS, each user is running their own SQLite database instance which maintains their model of the forum as they see it. They're using this as a full relational database including queries, indexes and triggers. The DB is serialized to a file locally. They do protect against SQL injection by using prepared statements, so the obvious "Johnny Drop Tables" attack won't work at least.
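For anyone wondering why prepared statements matter there, the pattern looks roughly like this with the raw sqlite3 C API. This is just a sketch; the table and column names are made up for illustration, not FMS's actual schema.
Code:
// Sketch of parameter binding with the sqlite3 C API: the user-supplied
// string is bound as data, never spliced into the SQL text, so
// "'; DROP TABLE ..." just becomes a weird message body, not executable SQL.
#include <sqlite3.h>

int insert_message(sqlite3 *db, const char *body) {
    sqlite3_stmt *stmt = nullptr;
    // Table/column names are made up for illustration.
    int rc = sqlite3_prepare_v2(db, "INSERT INTO tblMessage(Body) VALUES(?1);",
                                -1, &stmt, nullptr);
    if (rc != SQLITE_OK) return rc;

    sqlite3_bind_text(stmt, 1, body, -1, SQLITE_TRANSIENT);  // bound, not concatenated
    rc = sqlite3_step(stmt);                                 // SQLITE_DONE on success
    sqlite3_finalize(stmt);
    return rc == SQLITE_DONE ? SQLITE_OK : rc;
}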

Your node will have one thread running at all times to merge incoming messages into your local view of the forum (and process identity requests etc), as well as up to 3 optional threads:
1. An HTTP server to view the forum as HTML
2. An NNTP server to talk to Usenet clients
3. A Freenet listener that actually does the work on the network.

One "griefable" thing that I notice is that while you can tell the server to ignore incoming requests from untrusted nodes, those ignored requests don't get removed from the various "incoming" queues until the daily housekeeping thread runs. So an attacker could just pile up tons of requests and massive messages in your inbox. If there's any other sort of garbage collection / reclaim process, I haven't found it.
In fact, all the "web of trust" features are part of FMS, not Freenet itself, so if I'm reading this right, you will unavoidably end up with random, unwanted messages - and since the input queues are actually database tables, they will get serialized to your own hard disk too. The SQLite DB is in synchronous mode, so serialization happens after every commit.

There are also no limits on the size of incoming data, it'll happily go ahead and download any amount of data it's told to. There's some validation on the dimensions (height and width) of images, but that's just a bit of post-processing validation.

fms-src-0.3.82\src\translationsetup.cpp is a good place to look to see all the configurable features in human-readable form, so you can get an idea of just what the system is up to.

@stares at error messages - when I was mentioning blockchain before, I was thinking of using one of the big existing blockchains rather than one we create. Otherwise we'd just get 51% attacked. At any rate, I'm not sure the idea is really workable at all, I'd be interested in hearing other ideas for establishing a small bit of decentralized shared state.
 
"Retarding frivolous sign-ups" is among the worst things you could do. It's extremely difficult to get people to donate crypto to this forum, imagine how much harder it'd be for a forum nobody uses.

A much better model is to be allowed in if at least one other user vouches for you. It's then left up to the individual users what criteria to use:
  • CAPTCHA
  • Email verification
  • Phone verification
  • I know them IRL
  • Account on a different forum
  • etc
If you vouch for people who spam, you lose the ability to vouch for others.
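Roughly, the bookkeeping that implies looks like this. An in-memory toy in C++ with made-up names; a real system would sign these records and distribute them between peers rather than hold them in one process.
Code:
// Toy vouch registry: each account records who vouched for it, and banning a
// spammer also strips the voucher's right to vouch for anyone else.
#include <string>
#include <unordered_map>

struct Account {
    std::string voucher;      // who vouched this account in
    bool banned = false;
    bool may_vouch = true;    // lost if someone you vouched for turns out to spam
};

struct VouchRegistry {
    std::unordered_map<std::string, Account> accounts;

    bool vouch(const std::string &sponsor, const std::string &newcomer) {
        auto it = accounts.find(sponsor);
        if (it == accounts.end() || it->second.banned || !it->second.may_vouch)
            return false;               // sponsor isn't allowed to vouch
        Account a;
        a.voucher = sponsor;
        accounts[newcomer] = a;         // newcomer admitted
        return true;
    }

    void ban_for_spam(const std::string &user) {
        auto it = accounts.find(user);
        if (it == accounts.end()) return;
        it->second.banned = true;
        auto sp = accounts.find(it->second.voucher);
        if (sp != accounts.end())
            sp->second.may_vouch = false;   // voucher loses vouching rights
    }
};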


There is a much simpler solution: ban anyone who posts CP, ban the user who vouched for them, and don't store the media posted by banned users.

I strongly encourage you to check out FMS - it solved this problem more than 10 years ago.
You say retarding frivolous sign-ups is amongst the worst things we could do and then suggest we do something that does exactly that - i.e. require an existing user to vouch for you. I think you misunderstood me. I've proposed using a small crypto payment as the way to do it - a payment that is trivial in the single case but obviously becomes non-trivial when done en masse. You've proposed vouching for someone. I don't care what method is used so long as it's there and works. We can't have this be viable if account creation is either not required to post or it is trivial to do en masse.

I've thought about the "must be sponsored by a member in good standing" approach myself. I'm not wholly against it, but whilst it works for Gentlemen's Clubs and the Masons, it's a lot trickier for an anonymous online community. I don't know anyone on KF outside of KF. At least I don't know that I do. By its nature we all take precautions to separate our identity here from real life and even other online identities. I needed an email to sign up, but that's not public information that someone can use to say "I'll email Overly Serious and ask them to sponsor me." CAPTCHA and email verification aren't gate-keeping and don't match your model of an existing user vouching for them either, though you listed them under that.

Regards CP, yes - your solution of someone saying "Hey, that's CP, let's ban it" is simpler. That doesn't mean it's better. Using a service like PhotoDNA lets us proactively catch it before thousands of people have the misfortune of downloading it and seeing it; it can catch it being exchanged in private messages; and it sends a strong message to anyone who tries it that we actively root it out. And all we have to do is submit the media to the third party before allowing it to be posted. We don't know how many active mods we'll have in any solution we build or how on the ball they'll be. I don't think any of them could be as fast or as effective as a simple REST call with the media that gets back a response saying "Bad - do not accept, place the user that sent it into suspension pending evaluation", no matter how willing and numerous the mods were.
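To illustrate the shape of that gatekeeping step, here's a sketch against a completely hypothetical scanning endpoint using libcurl. This is not PhotoDNA's actual API (which requires an agreement with Microsoft and has its own request format); it's just the "submit media, get a verdict, fail closed" flow.
Code:
// Sketch: refuse to accept an upload until a (hypothetical) scanning service
// has cleared it. The URL and the "response body starts with OK" convention
// are made up for illustration; real services define their own formats.
#include <curl/curl.h>
#include <string>

static size_t collect(char *data, size_t size, size_t nmemb, void *out) {
    static_cast<std::string *>(out)->append(data, size * nmemb);
    return size * nmemb;
}

// Returns true only if the scanner explicitly clears the media.
bool media_allowed(const std::string &media_bytes) {
    CURL *curl = curl_easy_init();
    if (!curl) return false;                       // fail closed

    std::string response;
    curl_easy_setopt(curl, CURLOPT_URL, "https://scanner.example/check");  // hypothetical
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, media_bytes.data());
    curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE, static_cast<long>(media_bytes.size()));
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, collect);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &response);

    CURLcode rc = curl_easy_perform(curl);
    curl_easy_cleanup(curl);

    // Made-up convention: the service answers "OK" or "REJECT".
    return rc == CURLE_OK && response.rfind("OK", 0) == 0;
}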

I will check out FMS - thanks for that.

But speaking more seriously, if you're looking to set up a web of trust for a decentralized KF successor, the best thing to do is establish it before KF goes down so current users can more easily get themselves on board. If you wait until after Null pulls the plug, it's going to be considerably harder to know who's who, given the fairly anonymous nature of the userbase.

That's why I started a thread over in General for people to post their GPG public keys + fingerprints. I fully agree this will be much easier if we do it before KF goes (if it does) but Null said that he'll post a torrent of the full archive so posting something that lets you prove your identity post-Farms (like the GPG key + fingerprint) is at least useful.
 
I got onto usenet finally, ended up using Firebird. Found a free NNTP server fairly easily and kiwis were on it.
I saw that. Looks like there's three of us that have posted to f.h.kiwis now.

How does one rehost f.h.kiwis?
If you're really keen, you could try setting up your own NNTP (news) server. You'd want to set it up in such a way that the only group on it is f.h.kiwis, otherwise it could get very messy very quickly.

That said, there are news servers out there that should automatically add new newsgroups, thus propagating f.h.kiwis across usenet. I've done a bit of testing and it looks like other news servers have picked up f.h.kiwis.

Granted, it doesn't solve the problem of spam (many, many newsgroups have ground to a halt under the weight of spam), but it might at least buy some time between KF going dark (if/when that happens) and a more permanent solution being put into place.

Edit: yep, f.h.kiwis has propagated onto at least three free NNTP servers (AIOE, Neodome, Netfront). Time to put the based back into text-based newsgroups.
 
it can catch it being exchanged in private messages
There most likely won't be any such thing as "private messages" in a distributed system, since there would be nowhere private to hide them. Anyone wanting to talk privately will have to exchange encryption keys themselves or something.
 
I know that's the Freenet line, but it's a definite point of vulnerability if Freenet was to become a common platform for discussions of a type that governments want to suppress. You might not 'know' exactly what CP you're hosting... but you know that you probably are hosting some.
"You are probably hosting some" isn't sufficient for a crime. If it were, the Tor relay operators would've been charged with something by now, as would Google Drive.
It would be interesting to see how the Freenet project would respond if someone made a version that blacklists hashes for known CP. I suspect some substantial fraction of the community would use it as a resource to decide what to download next.
Known by whom?
How are you going to verify it's actually CP?
Are you gonna download a list of hashes some rando claims are CP and look at them to "verify"?
So what it looks like is that in FMS, each user is running their own SQLite database instance which maintains their model of the forum as they see it. They're using this as a full relational database including queries, indexes and triggers. The DB is serialized to a file locally. They do protect against SQL injection by using prepared statements, so the obvious "Johnny Drop Tables" attack won't work at least.

Your node will have one thread running at all times to merge incoming messages into your local view of the forum (and process identity requests etc), as well as up to 3 optional threads:
1. An HTTP server to view the forum as HTML
2. An NNTP server to talk to Usenet clients
3. A Freenet listener that actually does the work on the network.
Correct.
One "griefable" thing that I notice is that while you can tell the server to ignore incoming requests from untrusted nodes, those ignored requests don't get removed from the various "incoming" queues until the daily housekeeping thread runs. So an attacker could just pile up tons of requests and massive messages in your inbox. If there's any other sort of garbage collection / reclaim process, I haven't found it.
In fact, all the "web of trust" features are part of FMS, not Freenet itself, so if I'm reading this right, you will unavoidably end up with random, unwanted messages - and since the input queues are actually database tables, they will get serialized to your own hard disk too. The SQLite DB is in synchronous mode, so serialization happens after every commit.
It's not correct to say you have an inbox to which other nodes send messages. It's more correct to say that other nodes have outboxes from which you retrieve messages. There's no way to push data to a node. FMS will always ignore untrusted nodes, because it by definition doesn't know about them.

Freenet has a "web of trust" plugin, but that's a separate thing.
There are also no limits on the size of incoming data, it'll happily go ahead and download any amount of data it's told to. There's some validation on the dimensions (height and width) of images, but that's just a bit of post-processing validation.
That's not correct. The maximum size of a message (in XML format) is a hardcoded 1 MB, for example (see src/freenet/messagerequester.cpp:624), and attachments are just links to files. There's a limit to how many messages you will request from a peer per day.
fms-src-0.3.82\src\translationsetup.cpp is a good place to look to see all the configurable features in human-readable form, so you can get an idea of just what the system is up to.
Most of the business logic is in the classes in the root, or in src/freenet.
@stares at error messages - when I was mentioning blockchain before, I was thinking of using one of the big existing blockchains rather than one we create. Otherwise we'd just get 51% attacked. At any rate, I'm not sure the idea is really workable at all, I'd be interested in hearing other ideas for establishing a small bit of decentralized shared state.
Blockchains don't really help here - you're not looking to mathematically validate a ledger is consistent, just keep a log of posts.
 
You say retarding frivolous sign-ups is amongst the worst things we could do and then suggest we do something that does exactly that - i.e. require an existing user to vouch for you. I think you misunderstood me. I've proposed using a small crypto payment as the way to do it - a payment that is trivial in the single case but obviously becomes non-trivial when done en masse. You've proposed vouching for someone. I don't care what method is used so long as it's there and works. We can't have this be viable if account creation is either not required to post or it is trivial to do en masse.

I've thought about the "must be sponsored by a member in good standing" approach myself. I'm not wholly against it, but whilst it works for Gentlemen's Clubs and the Masons, it's a lot trickier for an anonymous online community. I don't know anyone on KF outside of KF. At least I don't know that I do. By its nature we all take precautions to separate our identity here from real life and even other online identities. I needed an email to sign up, but that's not public information that someone can use to say "I'll email Overly Serious and ask them to sponsor me." CAPTCHA and email verification aren't gate-keeping and don't match your model of an existing user vouching for them either, though you listed them under that.
The important thing is that with a vouching system, you'll decentralize registration. If "the system" has a common CAPTCHA system, then things will break if it turns out to be vulnerable. If Bob runs a "fill out a captcha, get vouched" service and that service breaks, then the only impact is that Bob's service will be broken until he fixes it.
Regards CP, yes - your solution of someone saying "Hey, that's CP, let's ban it" is simpler. That doesn't mean it's better. Using a service like PhotoDNA lets us proactively catch it before thousands of people have the misfortune of downloading it and seeing it; it can catch it being exchanged in private messages; and it sends a strong message to anyone who tries it that we actively root it out. And all we have to do is submit the media to the third party before allowing it to be posted. We don't know how many active mods we'll have in any solution we build or how on the ball they'll be. I don't think any of them could be as fast or as effective as a simple REST call with the media that gets back a response saying "Bad - do not accept, place the user that sent it into suspension pending evaluation", no matter how willing and numerous the mods were.
Are you suggesting all private messages be sent through Microsoft? How are you even going to do this in a decentralized system?

Integrating some system like that in a decentralized, WoT-based system would be possible. Scan all new posts for CP using whatever method you want, and negrate into oblivion anyone who posts an image that turns out positive. You can have anyone doing that, and depending on how accurate their results are, people can choose whether to trust their negrates for themselves.
There most likely won't be any such thing as "private messages" in a distributed system, since there would be nowhere private to hide them. Anyone wanting to talk privately will have to exchange encryption keys themselves or something.
Schematically, since you have everyone's public keys, you can just encrypt a post to their public key and post it somewhere public. There are also nicer ways to do it that don't leak metadata.
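As a sketch of that, with libsodium's sealed boxes (keys and message made up; where you publish the ciphertext is up to whatever the underlying forum transport is):
Code:
// Sketch: encrypt a "private message" to a recipient's public key, then publish
// the ciphertext anywhere public. Only the holder of the matching secret key
// can open it. Uses libsodium sealed boxes; no metadata protection here.
#include <sodium.h>
#include <string>
#include <vector>
#include <cstdio>

int main() {
    if (sodium_init() < 0) return 1;

    // Recipient's long-term keypair (the public half is what everyone knows).
    unsigned char pk[crypto_box_PUBLICKEYBYTES], sk[crypto_box_SECRETKEYBYTES];
    crypto_box_keypair(pk, sk);

    const std::string msg = "meet me on f.h.kiwis";
    std::vector<unsigned char> sealed(crypto_box_SEALBYTES + msg.size());
    crypto_box_seal(sealed.data(),
                    reinterpret_cast<const unsigned char *>(msg.data()),
                    msg.size(), pk);
    // 'sealed' is now safe to post publicly.

    std::vector<unsigned char> opened(msg.size());
    bool ok = crypto_box_seal_open(opened.data(), sealed.data(), sealed.size(), pk, sk) == 0;
    std::printf("decrypted: %s\n",
                ok ? std::string(opened.begin(), opened.end()).c_str() : "(failed)");
    return 0;
}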
 
That's not correct. The maximum size of a message (in XML format) is a hardcoded 1 MB, for example (see src/freenet/messagerequester.cpp:624),
You can ask for a maximum of 1MB to be returned, but I see nothing in there enforcing that. What's to stop the following:
Me: ClientGet, 1MB limit please.
Other node: PersistentGet, yes, I will send you 800KB (lying)
Other node: AllData, DataLength = a zillion.

If you look at the AllData handlers like in fms-src-0.3.82\src\freenet\messagerequester.cpp (and similar for other types) it never actually compares back against any of the limits that were originally requested.

There may well be some other limits in Freenet itself, but I've only looked at the FMS code.

FMS will always ignore untrusted nodes, because it by definition doesn't know about them.
Well, the code definitely anticipates untrusted nodes ending up in tblIdentity somehow, but I haven't convinced myself exactly how this works yet. It does filter them out from polling, but I wonder about these nulls.
Code:
if(m_localtrustoverrides==false)
    {
        sql+="AND (tblIdentity.LocalMessageTrust IS NULL OR tblIdentity.LocalMessageTrust>=(SELECT OptionValue FROM tblOption WHERE Option='MinLocalMessageTrust')) ";
        sql+="AND (tblIdentity.PeerMessageTrust IS NULL OR tblIdentity.PeerMessageTrust>=(SELECT OptionValue FROM tblOption WHERE Option='MinPeerMessageTrust')) ";
    }
 
You can ask for a maximum of 1MB to be returned, but I see nothing in there enforcing that. What's to stop the following:
Me: ClientGet, 1MB limit please.
Other node: PersistentGet, yes, I will send you 800KB (lying)
Other node: AllData, DataLength = a zillion.
The ClientGet is made to your local Freenet node, not to a remote node. FMS never directly interacts with the Internet.
If you look at the AllData handlers like in fms-src-0.3.82\src\freenet\messagerequester.cpp (and similar for other types) it never actually compares back against any of the limits that were originally requested.
The limits are passed to Freenet in the ClientGet message.
There may well be some other limits in Freenet itself, but I've only looked at the FMS code.
Yes, see https://github.com/freenet/wiki/wiki/FCPv2-ClientGet
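For reference, an FCP ClientGet is just a block of Name=Value lines sent to your local node, along these lines (the URI and identifier here are made up; the field names are from the page linked above):
Code:
ClientGet
URI=KSK@example-post.txt
Identifier=example-request-1
MaxSize=1048576
ReturnType=direct
EndMessage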
Well, the code definitely anticipates untrusted nodes ending up in tblIdentity somehow, but I haven't convinced myself exactly how this works yet. It does filter them out from polling, but I wonder about these nulls.
Code:
if(m_localtrustoverrides==false)
    {
        sql+="AND (tblIdentity.LocalMessageTrust IS NULL OR tblIdentity.LocalMessageTrust>=(SELECT OptionValue FROM tblOption WHERE Option='MinLocalMessageTrust')) ";
        sql+="AND (tblIdentity.PeerMessageTrust IS NULL OR tblIdentity.PeerMessageTrust>=(SELECT OptionValue FROM tblOption WHERE Option='MinPeerMessageTrust')) ";
    }
If trust is null, this means I don't have an opinion on them. If I give someone message trust but not trust list trust, for instance, I'll give them a null trust list trust. Likewise for all those not in my list, although those are obviously not in the database.

This is different from giving them zero trust, which is an explicit statement that they should be banned that will lower their reputation. Null trust doesn't have any bearing on reputation calculations, since it's ignored by default.
 
"You are probably hosting some" isn't sufficient for a crime. If it were, the Tor relay operators would've been charged with something by now, as would Google Drive.
Tor relays aren't 'hosting' anything. They simply relay traffic. FreeNet nodes retain and further distribute material that not just yourself, but others connected to you, request.

If you're running a Tor relay, you can fairly say that you're just relaying traffic. If you're running a FreeNet node, you're retaining and distributing a bunch of CP. You just can't say for sure whether it's babyfuck1.jpg or babyfuck2.jpg without a list of hashes of CP (which I'm sure law enforcement would build up if FreeNet ever really took off).
Known by whom?
How are you going to verify it's actually CP?
Are you gonna download a list of hashes some rando claims are CP and look at them to "verify"?
A nation state actor looking to attack FreeNet could build up a list legitimately. Anyone else would presumably go through Freesites that purport to host it, and build up a list of whatever images are present (with image display turned off, hopefully).

That could then be used to build a forensic tool to locally scan the computers of people identified for persecution, and/or to attack the main FreeNet project via a forked version of FreeNet that refuses to distribute known child pornography.
 
If you're running a FreeNet node, you're retaining and distributing a bunch of CP. You just can't say for sure whether it's babyfuck1.jpg or babyfuck2.jpg without a list of hashes of CP (which I'm sure law enforcement would build up if FreeNet ever really took off).
Speaking of such things, I found this article from about 5 years ago.
Seems like the network glows rather brightly, but at the time this was written, the focus of enforcement action was on the ultimate requesters of CP.
 
Hashes aren't going to find diddly in an encrypted store on a random drive on a random node in a big network. Not that I'm a fan of hosting some pedo's wank pics, but I can't think of a way to filter content in a double-blind setup like this, and running a node wouldn't make me fear the feds kicking my door down some night. If someone's actively hosting a kiddie-diddling site on the network, yeah, I could absolutely see several legal organizations going to the trouble of tracking them down, but a person just running an open node where the data's all encrypted and they don't have a key to view it seems pretty safe to me.
 
Every network devised in history has been used to spread CP. If that's the standard you want to follow, you need to go all Ted Kaczynski and live in a shack in the desert while writing manifestos. Any communication method that conceals the identity of users is inevitably going to attract criminals of all stripes, including sex criminals. There's no way to avoid that without compromising the security of the network.
 