Modern Web Woes - I'm mad at the internet

Had this happen to me. I was looking for a docking station for my laptop that added a disc drive because my laptop didn't have one by default, and I just got a bunch of cunts telling me that discs were obsolete in current year while contributing nothing to the conversation.
What the fuck is your problem you stupid nigger?

Everyone has been using USB floppy drives for decades now. You don't need one integrated into your docking station; just use some double-sided tape to attach your floppy drive to the side.
 
@Overly Serious

Some private sites that require a private session use a server/client log-in as you mentioned, but YouTube's media is public-facing and doesn't require a login to download, and the same goes for most streaming services. In VLC you can hit Ctrl+N, or go to File > Open Media and the Network tab, and paste in a URL. You may still run into the segmentation problem you mentioned with this method, though VLC will usually request the whole file from YouTube. There's a Firefox extension that puts a button over media files on websites that you can click to automatically open them in VLC if you have a shit connection and want to buffer it slowly. This is great for streaming websites with garbage upload bandwidth.
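
If you'd rather script it than click through the dialog, you can just launch VLC with the URL from the command line. A minimal sketch in Python, assuming VLC is on your PATH and the stream URL below is a placeholder; --network-caching is VLC's buffer size in milliseconds, which is about the closest thing to a "seconds to buffer" knob:

```python
# Minimal sketch: open a network stream in VLC from a script instead of
# the Ctrl+N dialog. The URL is a placeholder and vlc is assumed to be
# on the PATH.
import subprocess

stream_url = "https://example.com/path/to/stream.mp4"  # whatever URL you grabbed

subprocess.run([
    "vlc",
    "--network-caching=5000",  # buffer ~5 seconds (value is in milliseconds)
    stream_url,
])
```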

I want to point out that if you're simply pirating media, you can stream things using sequential downloads on torrents. This works in qBittorrent and most other modern torrent clients. In qBittorrent, while downloading, you can right-click the torrent and hit "Download in sequential order", then open it in VLC, and after it downloads to about 1-2% you can start watching it while it finishes downloading. It will download the pieces in order instead of grabbing whichever pieces are available first. It's the same as streaming without the faggotry.
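
If you want to flip that switch without touching the UI, qBittorrent's Web API (the same thing its WebUI talks to) exposes the toggle. A rough sketch, assuming the WebUI is enabled on localhost:8080 and the credentials are placeholders for your own:

```python
# Rough sketch: turn on sequential download for every torrent via
# qBittorrent's Web API. Assumes the WebUI is enabled at localhost:8080;
# username/password are placeholders.
import requests

BASE = "http://localhost:8080/api/v2"

session = requests.Session()
session.headers.update({"Referer": "http://localhost:8080"})  # keeps the CSRF check happy
session.post(f"{BASE}/auth/login",
             data={"username": "admin", "password": "adminadmin"})

for t in session.get(f"{BASE}/torrents/info").json():
    if not t.get("seq_dl"):  # skip torrents that are already sequential
        session.post(f"{BASE}/torrents/toggleSequentialDownload",
                     data={"hashes": t["hash"]})
        print(f"sequential download enabled for {t['name']}")
```
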
Thanks. I'm still hoping there's some registry setting or something I can tweak that just says "amount.of.seconds.to.buffer=5" somewhere. But I had no idea about an extension that just lets you redirect video output to VLC. That even handles cookies and DRM? I'll have a look and give whatever I find a go.

Much appreciated!
 
@Overly Serious
I did a bit of testing with the extension; it's not as powerful as simply going to Inspect > Network, grabbing the file that's streaming/downloading as it streams, and pasting that into VLC. That tends to work a lot better and will actually pull the full file on YouTube and a few of the pirating websites.
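
The Network-tab fishing can also be scripted: something like yt-dlp will resolve the direct media URL for you, and you just hand that to VLC. A rough sketch, assuming yt-dlp is installed (pip install yt-dlp) and the video URL is a placeholder:

```python
# Rough sketch: resolve the direct media URL (what you'd otherwise fish
# out of Inspect > Network) with yt-dlp, then hand it to VLC.
# Assumes yt-dlp is installed and vlc is on the PATH.
import subprocess
from yt_dlp import YoutubeDL

page_url = "https://www.youtube.com/watch?v=XXXXXXXXXXX"  # placeholder

with YoutubeDL({"format": "best", "quiet": True}) as ydl:
    info = ydl.extract_info(page_url, download=False)

direct_url = info["url"]  # direct link for the selected single-file format
subprocess.run(["vlc", direct_url])
```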
 
Looks like I'm shadowbanned in YouTube comments across the entire website. I'm not really sure why; I didn't post anything any spicier than any of the other people all around me. Maybe it's because I use uBlock?
 
How does that work and how is it different from a regular ban?
One tells you you're banned; the other "gaslights" you into thinking everything is fine while you're screaming into the void.
It's even worse when you're a content creator (IIRC shadowbanned videos/accounts don't even show up in searches, so unless you know where to look you're fucked).
 
In old SF, robots talk like robots and they can easily be portrayed as being incapable of error.

In reality, AI seems to talk like a hipster and makes way too many mistakes, especially in art.


In old SF: Safeguarding humanity from AI. Picking up a gun and fighting killer robots. Or being some tech wizard hacking killer robots.

In reality: "Safeguarding humanity from AI". Making computers as woke as possible.
 
We need an archive site that is able to archive websites that use captchas or have login requirements. I don't see why (especially with AI being a thing now) archive sites can't scrape pages that sit behind such anti-spam or security measures.

If I can log into Facebook with a sock account for the purpose of getting screencaps or scraping page data, then an archive server could be automated to do the same thing. As for captchas, an AI model could be trained to OCR the text or deconstruct the image puzzles to solve them. Hell, there's even AI that can use social engineering to solve captchas.
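
For the plain text-in-an-image kind of captcha, the OCR part at least is off-the-shelf. A minimal sketch with Tesseract, assuming the tesseract binary and pytesseract are installed and "captcha.png" stands in for whatever the scraper saved; real captchas usually need more preprocessing than this:

```python
# Minimal sketch: OCR a simple text captcha with Tesseract.
# Assumes tesseract and pytesseract are installed; "captcha.png" is a
# placeholder for whatever image the scraper saved.
from PIL import Image
import pytesseract

img = Image.open("captcha.png").convert("L")      # grayscale
img = img.point(lambda p: 255 if p > 140 else 0)  # crude threshold to strip noise

guess = pytesseract.image_to_string(img).strip()
print(f"captcha guess: {guess}")
```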

Of course I can't be fucked to make this happen myself. It's better for me to bitch and complain about it on this site where my suggestions will not be read by those who could make such changes a reality.
 
In old SF: Safeguarding humanity from AI. Picking up a gun and fighting killer robots. Or being some tech wizard hacking killer robots.

In reality: "Safeguarding humanity from AI". Making computers as woke as possible.
In SF, robots have some level of awareness and can innovate and adapt. "AI" just repeats back what it hears and what its training has told it is most likely to be the right response, like a furby on steroids, or a redditor.
 
We need an archive site that is able to archive websites that use captchas or have login requirements. I don't see why (especially with AI being a thing now) archive sites can't scrape pages that sit behind such anti-spam or security measures.

If I can log into Facebook with a sock account for the purpose of getting screencaps or scraping page data, then an archive server could be automated to do the same thing. As for captchas, an AI model could be trained to OCR the text or deconstruct the image puzzles to solve them. Hell, there's even AI that can use social engineering to solve captchas.

Of course I can't be fucked to make this happen myself. It's better for me to bitch and complain about it on this site where my suggestions will not be read by those who could make such changes a reality.
I tried to write something like this once, a decade ago. You have no idea what a headache it is, especially considering how hard it is to even get past a captcha to reach the backend stack.

With human input it's very simple, but when you automate it you run into a lot of request attempts that get stopped by captchas. I'm sure people have been able to do it, just look at /pol/. I'm personally baffled, and if anyone can give insight or has a Git repo or write-up on doing this, I'd appreciate it.

I could probably do this but it'd be a massive project and I don't have a lot of time.
 
I tried to write something like this once, a decade ago. You have no idea what a headache it is, especially considering how hard it is to even get past a captcha to reach the backend stack.

With human input it's very simple, but when you automate it you run into a lot of request attempts that get stopped by captchas. I'm sure people have been able to do it, just look at /pol/. I'm personally baffled, and if anyone can give insight or has a Git repo or write-up on doing this, I'd appreciate it.

I could probably do this but it'd be a massive project and I don't have a lot of time.
Fair enough, you make a good point. As for suggestions, I'm not an experienced coder. I mostly know web-related languages (HTML, CSS, MySQL and jQuery), just barely started dabbling in Python, and have gotten lazier about learning since ChatGPT writes most of my scripts now. All I have to do is debug and figure out how to piece it all together.

With that being said, my thinking was (in the case of Twitter) that a bot account could be created to meet the member requirement for viewing Twitter posts. But I'm sure there's a limit on how many queries can happen simultaneously on a single account. The next possibility I thought of was using Twitter's API, pulling the post text and media and then reconstructing the captured tweet(s) in the style of the site itself. But I imagine the act of reconstructing could be scrutinized for potential abuse, since users would have to trust the custom script reconstructing the tweet rather than copying it as is.
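
For what it's worth, the API route would look roughly like this. A sketch against the v2 tweet lookup endpoint, assuming a bearer token with read access and ignoring rate limits and the paid-tier situation; the tweet ID and token are placeholders:

```python
# Rough sketch: pull a tweet's text and media via the Twitter/X v2 API.
# TWEET_ID and BEARER_TOKEN are placeholders; what you can fetch depends
# on your API access tier.
import requests

TWEET_ID = "1234567890123456789"
BEARER_TOKEN = "YOUR_BEARER_TOKEN"

resp = requests.get(
    f"https://api.twitter.com/2/tweets/{TWEET_ID}",
    headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
    params={
        "tweet.fields": "created_at,author_id",
        "expansions": "attachments.media_keys",
        "media.fields": "url,preview_image_url,type",
    },
)
resp.raise_for_status()
data = resp.json()

print(data["data"]["text"])  # the tweet text
for media in data.get("includes", {}).get("media", []):
    print(media.get("url") or media.get("preview_image_url"))
```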

So yes, you are correct: an absolute pain in the ass. My hope was that someone could figure something out, considering many people besides us, like journalists, investigators, law enforcement, researchers and historians, are also looking to archive content from sites like Twitter.

Whoever manages to figure it out will certainly be swimming in theoretical geek pussy.
 