Archival Tools - How to archive anything.

Ghost Archive changed its logo
It's a special Christmas logo:
ga_christmas.png

7.png

8.png

L / A / A
The logo changes every year as December approaches (at the end of November):
9.png

L
 
I am looking to archive many Discord servers' text messages (not the media) on a large scale. How would I go about archiving Discord servers? I wish to do this for archival reasons, and I do not know of any other Discord indexers.
Many communities host a Discord server and tend to delete years of messages, and I want a solution to that problem. The communities I deal with aren't on Discord's discoverable servers, which means I must have a Discord account to access them. These servers range from 1,000 to 10,000,000 members.
 
Use DiscordChatExporterGUI if you need something really simple to use, with a GUI. It exports channels into nice, readable HTML files (or plain TXT for that matter). The downside: it's not great if you want to archive an entire server, since you'll have to do it channel by channel.

If you have to archive an entire server at once, use the command line and download the CLI version of DiscordChatExporter; it's far more efficient for bulk or full-server exports.

If you want to archive it publicly, use Kemono, but it only archives individual channels, and you have to give them your token.
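
For reference, a full-server export with the CLI version looks roughly like this (a sketch based on DiscordChatExporter's documented commands; exact flags can vary between versions, so check the project's wiki):
Code:
.\DiscordChatExporter.Cli.exe exportguild -t "YOUR_TOKEN" -g GUILD_ID -f Json

You can swap -f Json for PlainText or HtmlDark if you want readable output instead of machine-readable JSON, and exportall will dump every server your token can see.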
 
You can re-archive Kemono exports using Megalodon; every other service craps itself.
 
What's the easiest way to record Zoom without using OBS? Any command-line options?
 
If you have to archive an entire server at once, use the command line and download the CLI version of DiscordChatExporter; it's far more efficient for bulk or full-server exports.
I tried this out and the reactions are bloating the export. There's an important channel where messages have at least 15k reactions, and the software saves the user ID, time, name, and nickname for every one of them. I'm being limited by disk write speed on useless information.
Is there a way to gather the data without the unnecessary reaction information? All I need is the message text, the date of the message, and the user ID.

Also, I managed to compress a 580 MB JSON file into a 12.5 MB 7zip file. It might be possible to fit a large Discord server with years of history under the 200 MB limit with my proposed trimming.
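
If you export with -f Json, a small post-processing script can strip everything except the fields you need before compressing. A minimal sketch, assuming DiscordChatExporter's JSON layout (a top-level "messages" array where each message has "timestamp", "content", and a nested "author" object with an "id"); verify the keys against your own export:
Python:
import json
from pathlib import Path


def slim_export(src: str, dst: str) -> None:
    '''Keep only the message text, timestamp, and author ID from a
    DiscordChatExporter JSON export (field names assumed, check your export).'''
    data = json.loads(Path(src).read_text(encoding="utf-8"))
    slim = [
        {
            "userid": m["author"]["id"],
            "date": m["timestamp"],
            "text": m["content"],
        }
        for m in data.get("messages", [])
    ]
    # Compact separators keep the file as small as possible before 7z gets to it
    Path(dst).write_text(json.dumps(slim, separators=(",", ":")), encoding="utf-8")


if __name__ == "__main__":
    slim_export("channel_export.json", "channel_export_slim.json")

This won't speed up the export itself (the reactions still get downloaded), but it trims what you actually store and compress down to just the fields you listed.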
 
Ghost Archive can archive WebAssembly games.
 
Wanted to reference my post from the Glass thread in here too, in case anyone finds value in this scenario. Glass is pretty unique in that, in addition to the typical retard sperging that many lolcows engage in, he will also write literal fucking essays in YouTube Community Posts to seethe at people who talk negatively about him. He is also known for deleting community posts, sometimes within hours of posting them, so regular monitoring is needed to archive his community posts before he eventually deletes them.

I've written a script that uses Playwright to monitor his YouTube Community Posts for updates and save the link plus a full-page screenshot. Initial testing seems to work nicely in capturing a typical long essay-type community post:
Glass_community_post_2025-12-06_16-48-53.png

I was also hoping that I could automate a request to archive.md to automatically archive the page, but a CAPTCHA is now being used, so this will still need to be done manually.

Python:
'''
Lolcow YouTube Community Post Archiver


This script requires Playwright to be installed,
which can be done through the following commands:

1) pip install playwright
2) python -m playwright install
'''

import time
import os
from pathlib import Path
from playwright.sync_api import sync_playwright

# ---------------- SETTINGS ----------------

LOLCOW_NAME = "Glass"
CHANNEL_NAME = "@GlassWindow11"

NEW_POST_COUNT = 0
CHANNEL_POSTS_URL = f"https://www.youtube.com/{CHANNEL_NAME}/posts"
SCREENSHOT_PREFIX = f"{LOLCOW_NAME}_community_post_"
LAST_SEEN_FILE = f"{LOLCOW_NAME}_last_seen.txt"
CHECK_INTERVAL_SECONDS = 300  # 5 minutes

# ---------------- HELPERS ----------------

def read_last_seen():
    if os.path.exists(LAST_SEEN_FILE):
        return Path(LAST_SEEN_FILE).read_text().strip()
    return None


def save_last_seen(url):
    Path(LAST_SEEN_FILE).write_text(url)


def timestamp():
    return time.strftime("%Y-%m-%d_%H-%M-%S")


def check_for_new_post(playwright):
    global NEW_POST_COUNT
    browser = playwright.chromium.launch(headless=True)

    # Set browser window size
    context = browser.new_context(
        viewport={"width": 1920, "height": 1080}
    )

    page = context.new_page()

    try:
        print("\nChecking community posts...")

        # Navigate to Community Posts
        page.goto(CHANNEL_POSTS_URL, wait_until="networkidle")
        time.sleep(5)

        # Grab newest post link
        post_link = page.locator("a[href*='post/']").first

        if not post_link.count():
            print("No posts located.")
            return

        latest_url = post_link.get_attribute("href")

        if not latest_url.startswith("http"):
            latest_url = "https://www.youtube.com" + latest_url

        print("Latest post found:", latest_url)

        # Compare to last seen
        last_seen = read_last_seen()

        if latest_url == last_seen:
            print("No new posts detected.")
            return

        print("NEW POST DETECTED!")
        NEW_POST_COUNT += 1

        save_last_seen(latest_url)

        # Open post page
        page.goto(latest_url, wait_until="networkidle")
        time.sleep(4)

        # Expand text if needed
        try:
            read_more = page.locator(
                "tp-yt-paper-button:has-text('Read more'), "
                "yt-formatted-string:has-text('Read more')"
            )
            if read_more.count() > 0:
                read_more.first.click()
                time.sleep(2)
                print("Large community post detected. Expanding the full text of the community post.")
        except Exception:
            pass

        # Screenshot
        screenshot_name = f"{SCREENSHOT_PREFIX}{timestamp()}.png"
        page.screenshot(path=screenshot_name, full_page=True)
        print("Screenshot saved:", screenshot_name)

    finally:
        context.close()
        browser.close()


def main():
    print("\n")
    print("__________________________________________________")
    print("Starting YouTube Community Post Monitor")
    print(f"for {LOLCOW_NAME} on channel {CHANNEL_NAME}")
    print("--------------------------------------------------")

    with sync_playwright() as playwright:
        while True:
            try:
                check_for_new_post(playwright)
            except Exception as e:
                print("ERROR:", e)

            if NEW_POST_COUNT > 0:
                print("\n")
                print("**************************************************")
                print(f"THERE ARE {NEW_POST_COUNT} NEW COMMUNITY POSTS FOR YOU TO ARCHIVE")
                print("**************************************************")

            print(f"Sleeping for {CHECK_INTERVAL_SECONDS // 60} minutes before checking again...\n")
            time.sleep(CHECK_INTERVAL_SECONDS)


if __name__ == "__main__":
    main()

LOLCOW_NAME/CHANNEL_NAME can be substituted appropriately to monitor another lolcow's channel. You will also need to install Playwright, which you can do through the Python package manager:
Code:
1) pip install playwright
2) python -m playwright install


Below is sample Terminal output when running the script:
Code:
PS C:\Users\gizmog\Documents\KF\Archiving\Glass_Community_Post_Automation> python .\glass_community_post_archiver.py


__________________________________________________
Starting continuous YouTube Community Post Monitor
for Glass on channel @GlassWindow11
--------------------------------------------------

Checking community posts...
Latest post found: https://www.youtube.com/post/Ugkx7wAb2TYGMCcQ56Mr25I-doXkN17t6hKp
NEW POST DETECTED!
Large community post detected. Expanding the full text of the community post.
Screenshot saved: Glass_community_post_2025-12-06_16-48-53.png


**************************************************
THERE ARE 1 NEW COMMUNITY POSTS FOR YOU TO ARCHIVE
**************************************************
Sleeping for 5 minutes before checking again...
 
If a lolcow has privated or unlisted videos on their YouTube channel, is there a way to find them after the fact?
 

"Blessed are those who plant trees under whose shade they will never sit"


I'd like to give a primer on the different tools I use for different archival scenarios, to show some options for handling them and how easy it is to archive content with the tools available. It is everyone's responsibility to help with content archival so that amazing content doesn't become lost media. There are many ways content can get lost (a lolcow DFEs, a content host takes it down or goes out of business, etc.), so let's do our part to make sure such content sticks around for years to come.

I'm going to go through a few different common archival scenarios, and approaches I use to archive such content, to provide examples you can make use of (or improve upon).

Archiving Photos/Screenshots

This is likely to be the most common way you archive content to the thread. Whether you're taking a screenshot of a livestream, an X post, a YouTube community post, a comment section, etc., they will all generally follow a similar format.

Firstly, do not rely on 3rd-party sites to host your images (Imgur, etc.). Simply attach the photo to your post and thumbnail it. You should then also include the original link and any archive link associated with the photo. For example, here is how I archive a screenshot of a YouTube community post:


As you can see, I include (L|A) to represent a Link and an Archive of the YouTube community post. This is helpful because:
  1. We can see the original link to view the media in its original location, and observe any changes over time.
  2. The archive provides another source of truth that further confirms the original link as the source.
  3. If the original link becomes unavailable, we still have the URL that was used for it, which can be useful for linking to other content or information available.
  4. Archival sites can index such content, making it easier for those looking for it in the future to find it.

Archiving a Web Page

As an extension of the previous topic, sometimes your photo/screenshot may need to cover a much larger section of (or an entire) web page.

Firstly, I'd like to show how you can get very large screenshots, such as those requiring scrolling to see the full context. For example, Glass loves to write essays in his community posts, such that you need to scroll quite a bit to see the full post. Take for example the following YouTube community post of his:
Glass made this Community Post, complaining about UrghBla and mentioning a bunch of VRChat YouTubers and inviting them and anybody else who wants to "take care of this guy before he hurts more innocent people" by linking his Discord username.
( Link / Archive )

The lazy approach would be to just take multiple screenshots of the post and lay out the parts in your post. However, this isn't necessary, as browsers now offer full-page screenshot options by default. Here is how that is done:

1) Expand the full community post (or other content you want to screenshot), right-click the page, and select "Screenshot".

full_page_screenshot1.png

2) An overlay at the top of the page will show different options; select "Capture full page" here.

full_page_screenshot2.png

3) A second overlay will be generated that shows the full page being captured, which you can then save to a file by pressing the Save icon.

full_page_screenshot3.png

4) If you want to crop part of the web page to only cover the relevant bits, just open the photo in Paint and crop appropriately.

full_page_screenshot4.png


Now you have screenshot(s) to add to your post for the given web page, but you still need to link to the page as well as archive it separately. I suggest making use of archival sites such as Archive.md and/or Ghostarchive to archive web pages. These sites are relatively straightforward: you provide the URL to archive, and they will either give you the already-archived link (if someone else archived it first) or go through the process of archiving it and then generate a link for you.

As a tip, when Archive.md is working on archiving a link, it will use a URL like the following to indicate that the archival is in progress (/wip/ = work-in-progress): https://archive.md/wip/Yz9O7

Then, when it's done, it will use the link without the /wip/ (https://archive.md/Yz9O7). So if I'm working on a post, I kick off the archiving, put the finalized link into the post (while archiving is ongoing), and continue writing. Then, before submitting the post, I make sure the archive completed successfully (or edit the link to point to a new archive attempt).
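
If you want to script that "check back later" step, below is a rough sketch. It assumes the finalized (non-/wip/) link starts returning HTTP 200 once the snapshot completes; archive.md sometimes rate-limits or challenges automated clients, so treat this as a convenience check rather than something reliable:
Python:
import time

import requests


def wait_for_archive(final_url: str, tries: int = 20, delay: int = 30) -> bool:
    '''Poll the finalized archive.md link until it appears to resolve.
    Assumption: a completed snapshot answers with HTTP 200.'''
    for _ in range(tries):
        try:
            r = requests.get(final_url, timeout=30, allow_redirects=True)
            if r.status_code == 200:
                return True
        except requests.RequestException:
            pass  # transient network error, just retry
        time.sleep(delay)
    return False


if __name__ == "__main__":
    print(wait_for_archive("https://archive.md/Yz9O7"))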

What if the Web Page Archival is Incomplete/Broken?

You may come across a scenario where a web page archive is generated by the Archive.md/Ghostarchive tools, but it's either incomplete or the content is warped/malformed/etc. This typically happens on sites with heavy client-side processing (JavaScript, etc.) that doesn't play well with the archival tool's web page crawlers. In such cases, if there are URL parameters/arguments that can be passed to the content page to change how the content is rendered, that can be an option to help mitigate such behavior.

Another approach is utilizing an equivalent site that pulls down the same content for archival. For example, X posts tend to be annoying to archive directly, because the archive usually only captures the post itself rather than any parent/child posts in the chain. In such cases, I make use of Nitter to archive those X posts; the path stays the same, so for example https://x.com/user/status/123 becomes https://nitter.net/user/status/123 on a working Nitter instance. I make use of this approach for the FuturisticHub thread (Link vs. Archive):
EDIT: Speaking of deranged freaks, this X post combination helps illustrate the dark depths of Brian:
xFuturisticHub_Dec1_WTF.png
(L1|A1)
(L2|A2)

Archiving On-Demand Video Content

Archiving on-demand videos is another common form of archival, whether it involves YouTube, Kick, Twitch, Instagram, or other platforms. Thankfully, this can all be archived using the yt-dlp utility. Even though the name implies that it is specifically a YouTube video downloader, this tool can actually archive videos from many platforms. I've personally used it for YouTube, Kick, Twitch, and Instagram.

The tool offers many parameters and settings for customizing its behavior, but you will generally just use the following 95% of the time:
Code:
 .\yt-dlp.exe -t mp4 [VIDEO URL]

The main change will be what video URL you specify. Below are examples for different platforms:

Code:
.\yt-dlp.exe -t mp4 https://kick.com/pwrworld/videos/5151e56e-5f50-4cce-9197-572f5fddfc9f
 
.\yt-dlp.exe -t mp4 https://www.youtube.com/watch?v=SSaQyrM10Pc
 
.\yt-dlp.exe -t mp4 https://www.twitch.tv/videos/2649706684

.\yt-dlp.exe -t mp4 https://www.instagram.com/p/DR2JLsMjTfi/


This utility with the above parameters will generate the highest-quality mp4 file for the given video link. Some content is excessively large as a result of a high resolution or frame rate, neither of which is (generally) essential to archiving. Next, let's cover how to manage the size of the content for upload to your post.

Note: If it's a YouTube video, then depending on the length of the video you should also consider archiving it to PreserveTube.
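
If you'd rather cap the quality at download time instead of re-encoding afterwards, yt-dlp's format sorting can do that. A sketch (not every site offers lower-resolution formats, and this may interact with the -t mp4 preset, so check which format yt-dlp actually picks):
Code:
.\yt-dlp.exe -t mp4 -S "res:720" [VIDEO URL]

This tells yt-dlp to prefer formats no taller than 720p, which is usually plenty for multi-hour streams.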

How Do I Handle Large Video Sizes? How Can I Generate Clips From a Longer Video?

Video size is influenced by factors such as resolution, length, and bit rate. When locally archiving the video to your post, you (generally) don't need a 4k 60fps video to get the point of the archive across, so it's worth considering re-encoding the video to reduce those settings. Handbrake is my go-to tool for this, as it's very simple to do such work with it. In addition to re-encoding the video, you can also use it to clip or segment your video based on length. For example, PWR will run 6 - 8+ hour long streams, so even if you reduce the quality (while still keeping a usefully clear archive), it will still be multiple GBs in size. As such, you can split up a stream into multiple parts. I usually do this at the 1 hour mark for each stream (ex. 6 hour stream = 6 separate 1 hour videos) to keep each video at ~600 MB in size. Below are sample steps for breaking up a video into timed segments, which can help both with archiving the full video to your post and with generating clips of particular moments in a video (a command-line alternative using ffmpeg is sketched after these steps):

1) Open Handbrake and drag the video into the window

2) Select "Very Fast 720p30" in Preset dropdown. This will generate a relatively small video while still retaining decent quality, reducing file size from shrinking higher resolution uploads (ex. 1080p+) as well as reducing high frame rate (60fps) , which is generally not essential for archiving Glass content:

handbrake1.png


3) Figure out how many parts you need to divide the video into to keep the average file size under ~800 MB each. For example, if the video is 1 GB in size, I would just split it in half (i.e. two parts) so that each part is ~500 MB.

4) Fill in the timing numbers in the Range (seconds) field for each part, then press "Add to Queue":

handbrake2.png

handbrake3.png

5) After adding all the video parts to the queue, you can press the Queue button to see the parts and confirm the settings all match:

handbrake4.png

handbrake5.png

6) Press Start Queue and Handbrake will generate the videos in the defined destination folder.
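
If you'd rather do the splitting from the command line instead of queueing parts in Handbrake, ffmpeg's segment muxer can cut a file into fixed-length chunks without re-encoding. A sketch (this copies the streams as-is, so there's no quality reduction, only splitting, and it cuts on keyframes, so each part will be roughly rather than exactly an hour):
Code:
ffmpeg -i stream.mp4 -c copy -map 0 -f segment -segment_time 3600 -reset_timestamps 1 stream_part_%03d.mp4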

Can I Proactively Monitor For a Livestream and Archive it?

Yes! yt-dlp has a --wait-for-video x parameter, which can be used to check every x seconds for a livestream to be available, and then it'll start recording once it detects it:
Code:
.\yt-dlp.exe -t mp4 --wait-for-video 30 https://kick.com/pwrworld

ytarchive is also an easy utility to use that works on a loop (i.e. it starts checking again after the current livestream ends), but it only works for YouTube.
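
For reference, basic ytarchive usage looks something like this (a sketch; -w waits for the stream to go live, "best" picks the highest quality, and the loop/monitor behaviour varies by version, so check its --help):
Code:
.\ytarchive.exe -w https://www.youtube.com/@GlassWindow11/live best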

I've looked into options to help me automate this type of proactive monitoring, as I am monitoring multiple lolcows at a time, which led to the creation of...

Lolcow Archiver (1.0)

For automated archiving, I hated having to use a bunch of different tools (for YouTube vs. Kick vs. ...), so I've been working on a general "Lolcow Archiver" script that uses yt-dlp to check for new YouTube/Twitch/Kick streams and start archiving them, and that archives YouTube community posts using Playwright (since lolcows like Glass write essays in their community posts). It's a work in progress, but I've been using it for some time now and it seems to be generally working:
Python:
'''
Lolcow Archiver 1.0

This script can archive the following:

1) Kick/Twitch/YouTube Live Streams
2) YouTube Community Posts


This script requires yt-dlp to be installed for
video generation, in the same directory as this script.
You can get all the options for downloading yt-dlp
from its GitHub page:

https://github.com/yt-dlp/yt-dlp/wiki/Installation


This script requires Playwright to be installed for
community post snapshots, which can be installed through
the following commands

1) pip install playwright
2) python -m playwright install
'''

import subprocess
import sys
import time
import os
from pathlib import Path
from playwright.sync_api import sync_playwright


# ---------------- SETTINGS ----------------

NEW_POST_COUNT = 0
NEW_STREAM_COUNT = 0


# ---------------- HELPERS ----------------

def read_last_seen(last_seen_file):
    if os.path.exists(last_seen_file):
        return Path(last_seen_file).read_text().strip()
    return None


def save_last_seen(last_seen_file, url):
    Path(last_seen_file).write_text(url)


def timestamp():
    return time.strftime("%Y-%m-%d_%H-%M-%S")


def usage():
    print("Usage: python lolcow_archiver.py Kick/Twitch/YouTube_URL Lolcow_Name [YouTube Community Post URL]")
    print("\nFor Example:")
    print("python lolcow_archiver.py https://www.youtube.com/@GlassWindow11/live Glass https://www.youtube.com/@GlassWindow11/posts")
    print("python lolcow_archiver.py https://kick.com/clavicular/ Clavicular")
    print("python lolcow_archiver.py https://kick.com/pwrworld/ PWR https://www.youtube.com/@PWRWorldOfficial/posts")
    print("\n\n")

def check_for_livestream(live_url):
    global NEW_STREAM_COUNT
    script_dir = Path(__file__).resolve().parent
    ytdlp_path = script_dir / "yt-dlp.exe"

    if not ytdlp_path.exists():
        print(time.strftime("%H:%M:%S", time.localtime()), f": Error: yt-dlp.exe not found at {ytdlp_path}")
        sys.exit(1)

    command = [
        str(ytdlp_path),
        "-o",
        "%(title).50B [%(id)s].%(ext)s", # Limit filename length if stream title too long
        "-t",
        "mp4",
        live_url,
    ]

    print(time.strftime("%H:%M:%S", time.localtime()), ": Launching yt-dlp...")
    try:
        result = subprocess.run(command, check=False)
        if result.returncode == 0:
            NEW_STREAM_COUNT += 1
    except Exception as e:
        print(time.strftime("%H:%M:%S", time.localtime()), f": Error while running yt-dlp: {e}")


def check_for_new_post(lolcow_name, channel_posts_url, playwright):
    global NEW_POST_COUNT
    browser = playwright.chromium.launch(headless=True)
    screenshot_prefix = f"{lolcow_name}_community_post_"
    last_seen_file = f"{lolcow_name}_last_seen.txt"


    # Set browser window size
    context = browser.new_context(
        viewport={"width": 1920, "height": 1080}
    )

    page = context.new_page()

    try:
        print("\n")
        print(time.strftime("%H:%M:%S", time.localtime()), f": Checking community posts for {lolcow_name}...")

        # Navigate to Community Posts
        page.goto(channel_posts_url, wait_until="networkidle")
        time.sleep(5)

        # Grab newest post link
        post_link = page.locator("a[href*='post/']").first

        if not post_link.count():
            print(time.strftime("%H:%M:%S", time.localtime()), ": No posts located.")
            return

        latest_url = post_link.get_attribute("href")

        if not latest_url.startswith("http"):
            latest_url = "https://www.youtube.com" + latest_url

        print(time.strftime("%H:%M:%S", time.localtime()), ": Latest post found:", latest_url)

        # Compare to last seen
        last_seen = read_last_seen(last_seen_file)

        if latest_url == last_seen:
            print(time.strftime("%H:%M:%S", time.localtime()), ": No new posts detected.")
            return

        print(time.strftime("%H:%M:%S", time.localtime()), ": NEW POST DETECTED!")
        NEW_POST_COUNT += 1

        save_last_seen(last_seen_file, latest_url)

        # Open post page
        page.goto(latest_url, wait_until="networkidle")
        time.sleep(4)

        # Expand text if needed
        try:
            read_more = page.locator(
                "tp-yt-paper-button:has-text('Read more'), "
                "yt-formatted-string:has-text('Read more')"
            )
            if read_more.count() > 0:
                read_more.first.click()
                time.sleep(2)
                print(time.strftime("%H:%M:%S", time.localtime()), ": Large community post detected. Expanding the full text of the community post.")
        except Exception:
            pass

        # Screenshot
        screenshot_name = f"{screenshot_prefix}{timestamp()}.png"
        page.screenshot(path=screenshot_name, full_page=True)
        print(time.strftime("%H:%M:%S", time.localtime()), ": Screenshot saved:", screenshot_name)

    finally:
        context.close()
        browser.close()


# ---------------- MAIN ----------------

def main():
    capture_community_posts = False

    if len(sys.argv) < 3:
        usage()
        sys.exit(1)

    if len(sys.argv) == 4:
        capture_community_posts = True
        channel_posts_url = sys.argv[3]

    live_url = sys.argv[1]
    lolcow_name = sys.argv[2]
    time.strftime("%H:%M:%S", time.localtime())
    print("\n")
    print("__________________________________________________")
    print(f"Starting Lolcow Archiver for {lolcow_name}")
    print(f"Checking for livestream at {live_url}")
    if capture_community_posts:
        print (f"Checking for YouTube Community Posts at {channel_posts_url}")
    print("--------------------------------------------------")

    with sync_playwright() as playwright:
        while True:
            try:
                if capture_community_posts:
                    check_for_new_post(lolcow_name, channel_posts_url, playwright)
                check_for_livestream(live_url)

            except Exception as e:
                print("ERROR:", e)

            if capture_community_posts and NEW_POST_COUNT > 0:
                print("\n")
                print("**************************************************")
                print(f"THERE ARE {NEW_POST_COUNT} NEW COMMUNITY POSTS FOR YOU TO ARCHIVE")
                print("**************************************************")

            if NEW_STREAM_COUNT > 0:
                print("\n")
                print("**************************************************")
                print(f"THERE ARE {NEW_STREAM_COUNT} NEW STREAMS FOR YOU TO ARCHIVE")
                print("**************************************************")

            print("\n")
            print(time.strftime("%H:%M:%S", time.localtime()), ": Sleeping for 30 seconds before checking again...\n")
            time.sleep(30)


if __name__ == "__main__":
    main()

With this script, I just run separate instances of this script for different lolcows/channels I'm actively monitoring, and can pop into the Terminal windows to see if there are any updates I need to be aware of:
script_window.png

If the optional parameter is set, it will automatically generate web page screenshots of the lolcow's YouTube community posts. This is useful for lolcows who write extremely long community posts and then potentially delete them soon afterwards. Capturing the community posts this way avoids losing a seething post that ends up deleted minutes later (before you had a chance to notice and manually archive it). Here is an example of a generated community post archive:
Glass_community_post_2025-12-18_12-29-06.png

Here is an example of a notification reported when a new community post has been detected and archived:
script_window_communitypost_notification.png

Here is an example of a stream being detected and yt-dlp kicking off to start downloading the stream files:
script_window_stream_downloading.png

Note that the *_last_seen.txt file used to track the latest community post for a given lolcow can also be used to confirm the post's URL, which can then be used to archive it manually to archive.md/Ghostarchive.


Also note the following limitations:
  1. This script does not automatically generate archive.md/Ghostarchive archives for the YouTube community posts. I wanted to do this initially, but with captchas being added, I'm not going to bother working around that.
  2. Since it's a full-page screenshot of the YouTube community post, you will want to manually crop it appropriately before attaching it to your post. I ran into issues with posts being truncated when I initially tried to handle cropping in the script, so I scrapped that for now. It would probably work for very small posts, but I suspect the posts that need the "Read more" option to expand are what make it harder to automatically crop around the post element on the page.
  3. The generated video is always the highest-quality mp4 file possible, which can produce very large files depending on the resolution/fps used. You can tweak the yt-dlp arguments via the command variable within the script if you want to set specific limits.
  4. The script only checks its own directory for yt-dlp.exe; if you want to use a different path, you can tweak the ytdlp_path variable within the script.
  5. If the community post option is set and you haven't run the script for this particular lolcow before the current latest post was made, the script needs to run once to capture the latest existing community post for that lolcow. You can then kill the script after it has done so and run it again, so the new-post-detected message doesn't fire for that previously existing post.
 
This script does not automatically generate archive.md/Ghostarchive archives for the YouTube community posts. I wanted to do this initially, but with captchas being added, I'm not going to bother working around that.
One thing that would be incredibly useful would be some kind of generic captcha-forwarding server/mobile app that lets you do this for your own automated processes by sending a notification so you can fill out the captcha on your phone, at least for the simple Google/Cloudflare captchas where you just have to select a few boxes, in the same way the services that do captcha solving via AI or infinite Indians do, but self-hosted / with your own labour/compute. Is anyone aware of anything like this?
 