It's not that Google and/or the feds don't know about this; they're probably involved in it. YouTube runs AI bots that can sniff out 5 seconds of copyrighted material, but they can't sniff out CSAM/child porn? Get the fuck out of here.
PhotoDNA is such a fucking joke. Nobody in the image processing community takes it seriously. And to be clear, for everyone pointing out how YouTube has good copyright detection but shit CSAM detection: PhotoDNA and the copyright detectors are completely independent systems.
PhotoDNA is a "fuzzy hash." It turns the image into a series of numbers, each one the result of an operation over the cells of a 6x6 grid laid over the image. You then tune how many of those numbers need to be similar to a given entry in the NCMEC database to count as a match. The NCMEC's default settings boast a "1 in a trillion false positive rate."
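If that sounds simple, it is. Here's a toy sketch in Python of a grid-based fuzzy hash with a tunable match threshold. To be clear, PhotoDNA itself is proprietary, so the cell operation, sizes, and thresholds here are all made-up stand-ins to show the general shape of the thing, not the real algorithm:

```python
# Toy fuzzy hash in the spirit of the description above. The cell operation
# (mean brightness), the grid size, and both thresholds are assumptions,
# NOT the real PhotoDNA parameters.
from PIL import Image
import numpy as np

GRID = 6           # assumed: 6x6 grid of cells
THRESHOLD = 30     # assumed: max per-cell difference to count a cell as "similar"
MIN_MATCHING = 30  # tunable: how many of the 36 cells must be similar to flag a match

def fuzzy_hash(path: str) -> np.ndarray:
    # Normalize first (grayscale, fixed size) so recompression matters less.
    img = Image.open(path).convert("L").resize((GRID * 10, GRID * 10))
    px = np.asarray(img, dtype=np.float32)
    # One number per cell: here, just the cell's mean brightness.
    cells = px.reshape(GRID, 10, GRID, 10).mean(axis=(1, 3))
    return cells.flatten()  # 36 numbers

def matches(h1: np.ndarray, h2: np.ndarray) -> bool:
    # "Similar" means close enough, cell by cell; a match means
    # enough cells were similar. Both knobs are adjustable.
    similar = np.abs(h1 - h2) <= THRESHOLD
    return int(similar.sum()) >= MIN_MATCHING
```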
This article sums up everything wrong with this thing. You can skip the nerd babble, but given how many flaws it has, I'm surprised verified detections keep hitting new records every year. It's incredibly trivial to bypass: I've seen 'p spammers literally just change the image format to something obscure, since it's only been programmed for normalfag file formats.
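The file-format trick only works because those pipelines trust extensions and MIME types. The obvious fix, and I assume the serious implementations do something like it, is to decode every upload to raw pixels before hashing anything. Pillow, for instance, sniffs the real format from the file's magic bytes, so a renamed file still decodes as long as a codec is installed. A minimal sketch, reusing the toy fuzzy_hash from above:

```python
# Defensive sketch: never trust the extension or MIME type. Decode to pixels
# first, then hash the pixels. fuzzy_hash is the toy function from the
# previous sketch, not a real scanner.
from PIL import Image

def scan_upload(path: str):
    try:
        img = Image.open(path)  # format detected from magic bytes, not the name
        img.verify()            # cheap sanity check that the file actually decodes
    except Exception:
        return "undecodable"    # a real pipeline should quarantine these, not skip them
    return fuzzy_hash(path)     # hash the decoded pixels, whatever the container was
```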
You can also reverse the hashes (archive, sorry for it being archive.org but nothing else works). It's not enough to get a "clear" view of the image, but you can generally see what's happening.
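If you want an intuition for why that works: the hash values are just local statistics of the image, so you can gradient-descend a random image until its hash matches the leaked one. Here's a toy inversion of the made-up fuzzy hash from above; the real PhotoDNA inversion in that article is fancier, but the principle is the same. The output is a blurry grid-resolution reconstruction, which lines up with the "not clear, but you can see what's happening" result:

```python
# Toy hash inversion. The per-cell mean from the toy hash is differentiable
# (it's just average pooling), so we can optimize a random image until its
# hash matches the target. This inverts the TOY hash only.
import torch

def invert_hash(target: torch.Tensor, steps: int = 500) -> torch.Tensor:
    # target: the 36 leaked hash values (mean brightnesses in 0-255)
    guess = torch.rand(1, 1, 60, 60, requires_grad=True)
    opt = torch.optim.Adam([guess], lr=0.05)
    pool = torch.nn.AvgPool2d(10)  # same per-cell mean as the toy hash
    for _ in range(steps):
        opt.zero_grad()
        loss = ((pool(guess) * 255 - target.view(1, 1, 6, 6)) ** 2).mean()
        loss.backward()
        opt.step()
        guess.data.clamp_(0, 1)  # keep pixel values valid
    return guess.detach().squeeze()  # blurry 60x60 reconstruction
```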
It's easy to circumvent automated file scans, and it's impossible to manually verify that everything uploaded to your file hosting site is legal. Mediafire has been stellar in its reaction to this. They can only know about child porn on their servers that slipped past the automatic filtering if someone tells them about it, and so far they've proven that they care. The same cannot be said about YouTube.
An alternative I see used a lot by spammers on Discord, especially Reginmyre, bust and friends, etc., is uploading the material to a temporary file storage site or a throwaway domain, then, about 5-10 minutes before it expires, quickly spamming it in a thread or in the DMs of a specific target. By the time it gets reported to the NCMEC, the material is gone.
I suggest including a verbal description of the photo in the report. A lot of cops have seen the same CSAM videos again and again, and if a random civilian describes a video they've also seen on some pedo's hard drive, they can confirm the link led to illegal material. It's one of the two options suggested to mandatory reporters and website owners; the other is saving the image, sending the MD5 hash, and then quickly deleting it.
I can't stress this enough: do not do the second option. The second option is only for website owners and ISPs. If you see CSAM accidentally, and you have the mental strength to do it, describe it in the report. You are not Shakespeare or a physics professor; you don't need to describe every little detail, just describe what is "basically happening."
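For the site owners and ISPs who do have to use the hash option: computing the MD5 is a few lines. A sketch, with the obvious caveat that everything around the actual NCMEC reporting workflow is a placeholder here:

```python
# Hash-and-delete sketch for mandatory reporters (site owners/ISPs ONLY).
# The hexdigest string is what goes in the report; the file is removed
# immediately so no copy is kept.
import hashlib
import os

def hash_and_delete(path: str) -> str:
    md5 = hashlib.md5()
    with open(path, "rb") as f:
        # Read in chunks so large files don't need to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            md5.update(chunk)
    os.remove(path)           # do not keep a copy
    return md5.hexdigest()    # this string goes in the report
```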
If there is one thing I've learned from 'p spammers, it's to never, ever click on a catbox link or a .eu, .de, or .ru link that looks like gibberish.