Ghost Archive changed its title thing. It's a special Christmas logo:
View attachment 8216808
> posting image and not thumbnail.
It's not that big of an image, plus I want people to see what it is (I previewed both thumbnail and image).
> Ghost Archive changed its title thing. It's a special Christmas logo:
> View attachment 8216808
Got my hopes up that it was a new feature. The state Ghost Archive is in is kind of sad (it still works at the moment, though).
> It's a special Christmas logo:
Yeah, I'm thinking based.
View attachment 8216821
View attachment 8216822
View attachment 8216823

The logo is changed whenever December is coming up (at the end of November). They do it every year:
View attachment 8216831

> I am looking to archive the text messages (not the media) of many Discord servers on a large scale. How would I go about archiving Discord servers? I wish to do this for archival reasons. I do not know of any other Discord indexers.
> There are many communities hosting Discord servers that tend to delete years of messages, and I wish to have a solution to that problem. The communities I deal with aren't on Discord's discoverable servers, which means I must have a Discord account to access them. These servers have anywhere from 1,000 to 10,000,000 members.
Use DiscordChatExporterGUI if you need something really simple to use, with a GUI. It exports channels into nice, readable HTML files (or plain TXT, for that matter). The downside: it's not great if you want to archive an entire server; you'll have to do it channel by channel.
> Use DiscordChatExporterGUI if you need something really simple to use, with a GUI. It exports channels into nice, readable HTML files (or plain TXT, for that matter).
If you have to archive an entire server at once, use the command line and download the CLI version of DiscordChatExporter; it's way more efficient for bulk or full-server exports (example invocations below).
If you want to archive it publicly, use Kemono, but it only archives channels, and you have to give them your token. You can re-archive Kemono exports using Megalodon; every other service craps itself.
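For reference, a minimal sketch of the CLI calls (the token, channel ID, and server ID are placeholders; -f picks the export format):
.\DiscordChatExporter.Cli.exe export -t YOUR_TOKEN -c CHANNEL_ID -f HtmlDark
.\DiscordChatExporter.Cli.exe exportguild -t YOUR_TOKEN -g SERVER_ID -f PlainText
.\DiscordChatExporter.Cli.exe exportall -t YOUR_TOKEN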
> What's the easiest way to record Zoom without using OBS? Any command line options?
ffmpeg can record from the screen.
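A minimal sketch for Windows, not a one-size-fits-all command: gdigrab captures the whole desktop, and the dshow audio device name is an assumption; list your actual devices with the second command and substitute the name.
.\ffmpeg.exe -f gdigrab -framerate 30 -i desktop -f dshow -i audio="Microphone" -c:v libx264 -preset ultrafast output.mp4
.\ffmpeg.exe -list_devices true -f dshow -i dummy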
> If you have to archive an entire server at once, use the command line and download the CLI version of DiscordChatExporter; it's way more efficient for bulk or full-server exports.
I tried this out and I'm being bloated by the reactions. There's an important channel where messages have at least 15k reactions each, yet the software saves the user ID, time, name, and nickname for every one. I'm being restricted by the disk write speed for useless information.

'''
Lolcow YouTube Community Post Archiver
This script requires Playwright to be installed,
which can be done through the following commands:
1) pip install playwright
2) python -m playwright install
'''
import time
import os
from pathlib import Path

from playwright.sync_api import sync_playwright

# ---------------- SETTINGS ----------------
LOLCOW_NAME = "Glass"
CHANNEL_NAME = "@GlassWindow11"
NEW_POST_COUNT = 0
CHANNEL_POSTS_URL = f"https://www.youtube.com/{CHANNEL_NAME}/posts"
SCREENSHOT_PREFIX = f"{LOLCOW_NAME}_community_post_"
LAST_SEEN_FILE = f"{LOLCOW_NAME}_last_seen.txt"
CHECK_INTERVAL_SECONDS = 300  # 5 minutes

# ---------------- HELPERS ----------------
def read_last_seen():
    if os.path.exists(LAST_SEEN_FILE):
        return Path(LAST_SEEN_FILE).read_text().strip()
    return None

def save_last_seen(url):
    Path(LAST_SEEN_FILE).write_text(url)

def timestamp():
    return time.strftime("%Y-%m-%d_%H-%M-%S")

def check_for_new_post(playwright):
    global NEW_POST_COUNT
    browser = playwright.chromium.launch(headless=True)
    # Set browser window size
    context = browser.new_context(
        viewport={"width": 1920, "height": 1080}
    )
    page = context.new_page()
    try:
        print("\nChecking community posts...")
        # Navigate to Community Posts
        page.goto(CHANNEL_POSTS_URL, wait_until="networkidle")
        time.sleep(5)
        # Grab newest post link
        post_link = page.locator("a[href*='post/']").first
        if not post_link.count():
            print("No posts located.")
            return
        latest_url = post_link.get_attribute("href")
        if not latest_url.startswith("http"):
            latest_url = "https://www.youtube.com" + latest_url
        print("Latest post found:", latest_url)
        # Compare to last seen
        last_seen = read_last_seen()
        if latest_url == last_seen:
            print("No new posts detected.")
            return
        print("NEW POST DETECTED!")
        NEW_POST_COUNT += 1
        save_last_seen(latest_url)
        # Open post page
        page.goto(latest_url, wait_until="networkidle")
        time.sleep(4)
        # Expand text if needed
        try:
            read_more = page.locator(
                "tp-yt-paper-button:has-text('Read more'), "
                "yt-formatted-string:has-text('Read more')"
            )
            if read_more.count() > 0:
                read_more.first.click()
                time.sleep(2)
                print("Large community post detected. Expanding the full text of the community post.")
        except Exception:
            pass
        # Screenshot
        screenshot_name = f"{SCREENSHOT_PREFIX}{timestamp()}.png"
        page.screenshot(path=screenshot_name, full_page=True)
        print("Screenshot saved:", screenshot_name)
    finally:
        context.close()
        browser.close()

def main():
    print("\n")
    print("__________________________________________________")
    print("Starting YouTube Community Post Monitor")
    print(f"for {LOLCOW_NAME} on channel {CHANNEL_NAME}")
    print("--------------------------------------------------")
    with sync_playwright() as playwright:
        while True:
            try:
                check_for_new_post(playwright)
            except Exception as e:
                print("ERROR:", e)
            if NEW_POST_COUNT > 0:
                print("\n")
                print("**************************************************")
                print(f"THERE ARE {NEW_POST_COUNT} NEW COMMUNITY POSTS FOR YOU TO ARCHIVE")
                print("**************************************************")
            print(f"Sleeping for {CHECK_INTERVAL_SECONDS // 60} minutes before checking again...\n")
            time.sleep(CHECK_INTERVAL_SECONDS)

if __name__ == "__main__":
    main()
To set it up:
1) pip install playwright
2) python -m playwright install
PS C:\Users\gizmog\Documents\KF\Archiving\Glass_Community_Post_Automation> python .\glass_community_post_archiver.py
__________________________________________________
Starting continuous YouTube Community Post Monitor
for Glass on channel @GlassWindow11
--------------------------------------------------
Checking community posts...
Latest post found: https://www.youtube.com/post/Ugkx7wAb2TYGMCcQ56Mr25I-doXkN17t6hKp
NEW POST DETECTED!
Large community post detected. Expanding the full text of the community post.
Screenshot saved: Glass_community_post_2025-12-06_16-48-53.png
**************************************************
THERE ARE 1 NEW COMMUNITY POSTS FOR YOU TO ARCHIVE
**************************************************
Sleeping for 5 minutes before checking again...
> how to archive instagram
archive.today. There might be scrapers, but I only know about web services; almost every alternative frontend for viewing Instagram is dead or behind Cloudflare.
> how to archive instagram
JDownloader will do it on the desktop.
> If a lolcow has privated or unlisted videos on their YouTube channel, is there a way to find them after the fact?
Use this: https://mattw.io/youtube-metadata/
May have something to do with the widespread power outage in San Francisco: PG&E outages in S.F. (Kiwi post)
Glass made this community post complaining about UrghBla, mentioning a bunch of VRChat YouTubers and inviting them and anybody else who wants to "take care of this guy before he hurts more innocent people", linking his Discord username.
View attachment 8286911 (Link / Archive)
.\yt-dlp.exe -t mp4 [VIDEO URL]
.\yt-dlp.exe -t mp4 https://kick.com/pwrworld/videos/5151e56e-5f50-4cce-9197-572f5fddfc9f
.\yt-dlp.exe -t mp4 https://www.youtube.com/watch?v=SSaQyrM10Pc
.\yt-dlp.exe -t mp4 https://www.twitch.tv/videos/2649706684
.\yt-dlp.exe -t mp4 https://www.instagram.com/p/DR2JLsMjTfi/
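Instagram links often refuse anonymous downloads; yt-dlp can borrow a logged-in browser session instead. A sketch, assuming a local Firefox profile that is already signed in:
.\yt-dlp.exe -t mp4 --cookies-from-browser firefox https://www.instagram.com/p/DR2JLsMjTfi/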
.\yt-dlp.exe -t mp4 --wait-for-video 30 https://kick.com/pwrworld
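--wait-for-video 30 polls every 30 seconds until the channel goes live. For YouTube streams specifically, adding --live-from-start tries to download from the beginning of the broadcast rather than the moment you joined (it's YouTube-only and marked experimental); a sketch using the /live URL from the usage examples in the script below:
.\yt-dlp.exe -t mp4 --live-from-start --wait-for-video 30 https://www.youtube.com/@GlassWindow11/live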
'''
Lolcow Archiver 1.0
This script can archive the following:
1) Kick/Twitch/YouTube Live Streams
2) YouTube Community Posts
This script requires yt-dlp to be installed for
video generation, in the same directory as this script.
You can get all the options for downloading yt-dlp
from its GitHub page:
https://github.com/yt-dlp/yt-dlp/wiki/Installation
This script requires Playwright to be installed for
community post snapshots, which can be installed through
the following commands:
1) pip install playwright
2) python -m playwright install
'''
import subprocess
import sys
import time
import os
from pathlib import Path

from playwright.sync_api import sync_playwright

# ---------------- SETTINGS ----------------
NEW_POST_COUNT = 0
NEW_STREAM_COUNT = 0

# ---------------- HELPERS ----------------
def read_last_seen(last_seen_file):
    if os.path.exists(last_seen_file):
        return Path(last_seen_file).read_text().strip()
    return None

def save_last_seen(last_seen_file, url):
    Path(last_seen_file).write_text(url)

def timestamp():
    return time.strftime("%Y-%m-%d_%H-%M-%S")

def usage():
    print("Usage: python lolcow_archiver.py Kick/Twitch/YouTube_URL Lolcow_Name [YouTube Community Post URL]")
    print("\nFor Example:")
    print("python lolcow_archiver.py https://www.youtube.com/@GlassWindow11/live Glass https://www.youtube.com/@GlassWindow11/posts")
    print("python lolcow_archiver.py https://kick.com/clavicular/ Clavicular")
    print("python lolcow_archiver.py https://kick.com/pwrworld/ PWR https://www.youtube.com/@PWRWorldOfficial/posts")
    print("\n\n")

def check_for_livestream(live_url):
    global NEW_STREAM_COUNT
    script_dir = Path(__file__).resolve().parent
    ytdlp_path = script_dir / "yt-dlp.exe"
    if not ytdlp_path.exists():
        print(time.strftime("%H:%M:%S", time.localtime()), f": Error: yt-dlp.exe not found at {ytdlp_path}")
        sys.exit(1)
    command = [
        str(ytdlp_path),
        "-o",
        "%(title).50B [%(id)s].%(ext)s",  # Limit filename length if the stream title is too long
        "-t",
        "mp4",
        live_url,
    ]
    print(time.strftime("%H:%M:%S", time.localtime()), ": Launching yt-dlp...")
    try:
        result = subprocess.run(command, check=False)
        if result.returncode == 0:
            NEW_STREAM_COUNT += 1
    except Exception as e:
        print(time.strftime("%H:%M:%S", time.localtime()), f": Error while running yt-dlp: {e}")

def check_for_new_post(lolcow_name, channel_posts_url, playwright):
    global NEW_POST_COUNT
    browser = playwright.chromium.launch(headless=True)
    screenshot_prefix = f"{lolcow_name}_community_post_"
    last_seen_file = f"{lolcow_name}_last_seen.txt"
    # Set browser window size
    context = browser.new_context(
        viewport={"width": 1920, "height": 1080}
    )
    page = context.new_page()
    try:
        print("\n")
        print(time.strftime("%H:%M:%S", time.localtime()), f": Checking community posts for {lolcow_name}...")
        # Navigate to Community Posts
        page.goto(channel_posts_url, wait_until="networkidle")
        time.sleep(5)
        # Grab newest post link
        post_link = page.locator("a[href*='post/']").first
        if not post_link.count():
            print(time.strftime("%H:%M:%S", time.localtime()), ": No posts located.")
            return
        latest_url = post_link.get_attribute("href")
        if not latest_url.startswith("http"):
            latest_url = "https://www.youtube.com" + latest_url
        print(time.strftime("%H:%M:%S", time.localtime()), ": Latest post found:", latest_url)
        # Compare to last seen
        last_seen = read_last_seen(last_seen_file)
        if latest_url == last_seen:
            print(time.strftime("%H:%M:%S", time.localtime()), ": No new posts detected.")
            return
        print(time.strftime("%H:%M:%S", time.localtime()), ": NEW POST DETECTED!")
        NEW_POST_COUNT += 1
        save_last_seen(last_seen_file, latest_url)
        # Open post page
        page.goto(latest_url, wait_until="networkidle")
        time.sleep(4)
        # Expand text if needed
        try:
            read_more = page.locator(
                "tp-yt-paper-button:has-text('Read more'), "
                "yt-formatted-string:has-text('Read more')"
            )
            if read_more.count() > 0:
                read_more.first.click()
                time.sleep(2)
                print(time.strftime("%H:%M:%S", time.localtime()), ": Large community post detected. Expanding the full text of the community post.")
        except Exception:
            pass
        # Screenshot
        screenshot_name = f"{screenshot_prefix}{timestamp()}.png"
        page.screenshot(path=screenshot_name, full_page=True)
        print(time.strftime("%H:%M:%S", time.localtime()), ": Screenshot saved:", screenshot_name)
    finally:
        context.close()
        browser.close()

# ---------------- MAIN ----------------
def main():
    capture_community_posts = False
    if len(sys.argv) < 3:
        usage()
        sys.exit(1)
    if len(sys.argv) == 4:
        capture_community_posts = True
        channel_posts_url = sys.argv[3]
    live_url = sys.argv[1]
    lolcow_name = sys.argv[2]
    print("\n")
    print("__________________________________________________")
    print(f"Starting Lolcow Archiver for {lolcow_name}")
    print(f"Checking for livestream at {live_url}")
    if capture_community_posts:
        print(f"Checking for YouTube Community Posts at {channel_posts_url}")
    print("--------------------------------------------------")
    with sync_playwright() as playwright:
        while True:
            try:
                if capture_community_posts:
                    check_for_new_post(lolcow_name, channel_posts_url, playwright)
                check_for_livestream(live_url)
            except Exception as e:
                print("ERROR:", e)
            if capture_community_posts and NEW_POST_COUNT > 0:
                print("\n")
                print("**************************************************")
                print(f"THERE ARE {NEW_POST_COUNT} NEW COMMUNITY POSTS FOR YOU TO ARCHIVE")
                print("**************************************************")
            if NEW_STREAM_COUNT > 0:
                print("\n")
                print("**************************************************")
                print(f"THERE ARE {NEW_STREAM_COUNT} NEW STREAMS FOR YOU TO ARCHIVE")
                print("**************************************************")
            print("\n")
            print(time.strftime("%H:%M:%S", time.localtime()), ": Sleeping for 30 seconds before checking again...\n")
            time.sleep(30)

if __name__ == "__main__":
    main()
> This script does not automatically generate archive.md/Ghostarchive archives for the YouTube community posts. I wanted to do this initially, but with captchas being added, I'm not going to bother working around that.
One thing that would be incredibly useful would be some kind of generic captcha-forwarding server/mobile app that would let you handle this for your own automated processes: it sends a notification so you can fill out the captcha on your phone, at least for the simple Google/Cloudflare captchas where you just have to select a few boxes, the same way the services that do captcha solving via AI or infinite Indians do, but self-hosted, with your own labour/compute. Is anyone aware of anything like this? A rough sketch of the notification half is below.
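I'm not aware of an off-the-shelf tool for this, but the notification half is easy to self-host. A minimal sketch in Python, assuming ntfy.sh for the push (the topic name is a placeholder you'd subscribe to in the ntfy app) and a headful Playwright window; note it only pings your phone, you still solve the captcha on the machine running the browser:
'''
Hypothetical captcha pause-and-notify helper, not an existing tool.
Pushes a notification via ntfy.sh, then pauses a headful Playwright
session so a human can solve the captcha.
'''
import requests
from playwright.sync_api import sync_playwright

NTFY_TOPIC = "my-archiver-captchas"  # placeholder; pick your own secret topic string

def notify_phone(message):
    # ntfy.sh turns a plain HTTP POST into a push notification
    # on any phone subscribed to the topic
    requests.post(f"https://ntfy.sh/{NTFY_TOPIC}", data=message.encode("utf-8"))

with sync_playwright() as p:
    browser = p.chromium.launch(headless=False)  # headful so a human can click the captcha
    page = browser.new_page()
    page.goto("https://archive.ph/")
    # Crude captcha detection - this selector is an assumption, adjust per site
    if page.locator("iframe[src*='captcha'], iframe[src*='challenge']").count() > 0:
        notify_phone(f"Captcha hit on {page.url} - come solve it")
        page.pause()  # opens the Playwright Inspector; resume after solving
    browser.close()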