> Fuck I hate how much of a brainlet you Chads make me feel like sometimes. Fuck this thread, honestly.

Don't let it get to you. C (and Assembly too) aren't really difficult, just really tedious and error-prone.
C++ of course is a whole different story and deserves its reputation. Learning it is useful because you get to see where every other language in existence said "yeah, wtf, let's not do that." If you're working with generics in C++ and miss a parenthesis or semicolon, you're going to get like 15,000 lines of error messages. It's hilarious.

Yeah, I'd say C is one of the simplest languages overall, and it's weird to me that so many people seem afraid of using it. Especially for small stuff (where performance still matters somewhat) that needs tight integration, C is really hard to beat and quite approachable. I feel a lot of the "simpler" languages that hide the tedium cost just as much time: you might have the solution implemented quicker, but then you spend much longer debugging why it didn't work exactly the way you imagined it to, only to find out it was some obscure language limitation, something "lost in translation" between the abstraction layers, and/or a buggy library or two. C is much more straightforward.
> C++ of course is a whole different story and deserves its reputation.

As far as I'm concerned, C++ became a lolcow when it added lambdas.

> If you're working with generics in C++

THEY'RE CALLED TEMPLATES, YOU C# (or Java?) WEENIE

> and miss a parenthesis or semicolon you're going to get like 15,000 lines of error messages, it's hilarious.

You're not wrong tho. Plus you've got shit like iostream to work with in the STL. I don't know which coked-out academic came up with that whole idea.

I'm actually tempted to do a talk at some local C++ users group meeting on how I learned to stop worrying and love `<iostream>` - after 10+ years of fear and loathing of that monstrosity. Turns out that writing your own iomanips and `std::streambuf`-derived classes is pretty easy and immensely useful. I think it's one of the best things I learned about C++ after using it for all that time.

I'd be interested in some examples and use cases. I think my biggest beef with iostream is that it's stateful, though.
> I'd be interested in some examples and use cases. I think my biggest beef with iostream is that it's stateful though.

Yeah, the statefulness can be a bitch sometimes and is inconsistent: `boolalpha` and the `basefield` iomanips stick, but `setw()` applies only to the next token, and so on.

As for examples: most of the mileage comes from implementing `operator << (std::ostream &, ...)`
on various types. So what's the problem, you might say? Just implement the damn operator and be done with it! Sure, it mostly works, but it doesn't always reduce the boilerplate enough for my tastes. Say you start with:

```cpp
if (!convertEncoding(name)) {
    os << "Encoding error in field \"name\"\n";
    return;
}
if (!convertEncoding(description)) {
    os << "Encoding error in field \"description\"\n";
    return;
}
//etc.
```

You can fold the repetition into a loop:

```cpp
using FieldDescription = std::pair <std::string *, const char *>;
for (auto &p : std::initializer_list <FieldDescription>{{&name, "name"}, {&description, "description"}}) {
    if (!convertEncoding(*p.first)) {
        os << "Encoding error in field " << p.second << '\n';
        return;
    }
}
```

and then move the message formatting into a small class with its own `operator <<`:

```cpp
namespace Error {
class Encoding {
public:
    Encoding(const std::string &s) : m_s{s} {}
private:
    const std::string m_s;
    friend std::ostream & operator << (std::ostream &os, const Encoding &obj)
    {
        return os << "Encoding error in field " << obj.m_s;
    }
};
}
```

so that the loop body becomes:

```cpp
os << Error::Encoding{p.second} << '\n';
```

I keep a bunch of such classes in the `namespace Error`, so that I can do, for example:

```cpp
std::string name = "FullRetard";
constexpr unsigned MaxLength = 8;
if (name.size() > MaxLength)
    os << Error::TooLong{"name", name, MaxLength} << '\n';
```

where `TooLong`'s operator boils down to:

```cpp
return os << std::quoted(obj.m_s) << " too long: " << obj.m_value.size() << " bytes where max allowed is " << obj.m_limit;
```

and prints:

```
"name" too long: 10 bytes where max allowed is 8
```
Another example: outputting a `std::vector <int>` joined by some commas. The following code could have been generalized with templates and iterators to support a wide array of data structures, but I want to keep the example simple. The naive version:

```cpp
std::vector <int> data;
for (auto i = 0u; i < data.size(); ++i)
    os << data[i] << ", ";
```

leaves a trailing `, `, which is no bueno. So you special-case the separator:

```cpp
for (auto i = 0u; i < data.size(); ++i) {
    if (i != 0)
        os << ", ";
    os << data[i];
}
```

or the first element:

```cpp
os << data[0];
for (auto i = 1u; i < data.size(); ++i) {
    os << ", " << data[i];
}
```

...and what happens in that last one when the `vector` is empty? Nasal daemons, that's what! This fancy-schmancy C++ is getting severely retarded and we haven't even got to the range-based `for` loops. So here's my take:

```cpp
template <typename T> requires std::ranges::range <T> // range, not view, so plain containers work too
class Joiner {
public:
    Joiner(const T &data, std::string_view separator) : m_data{data}, m_separator{separator} {}
private:
    const T &m_data;
    const std::string_view m_separator;
    friend std::ostream & operator << (std::ostream &os, const Joiner &obj)
    {
        if (obj.m_data.empty())
            return os;
        os << *obj.m_data.begin();
        for (const auto &elem : obj.m_data | std::views::drop(1))
            os << obj.m_separator << elem;
        return os;
    }
};
```

The only non-obvious part is `| std::views::drop(1)`, which basically "advances" the range given on the left side of the "pipe" by one element. Usage:

```cpp
std::vector <int> data;
os << Joiner{data, ", "} << '\n';

auto mul2 = [](auto &&v) { return v * 2; };
os << Joiner{std::views::transform(data, mul2) | std::views::reverse, ", "} << '\n';
```

These days the committee has you covered, too: there's `std::views::join_with`
in C++23.

Another use case: say I have a class `QueryResult` storing the result of a database query. I want to output, let's say, a JSON document with these results, BUT I want to output the rows with some offset and some limit. Basically, I make the query without the `OFFSET` and `LIMIT` parts in the SQL and then I want to partially output the result. Think: paging of results.

You could store the offset and limit in the `QueryResult` object itself, but that's ugly, prone to errors, violates OOP, and is thread-unsafe (unless you plan to lock the whole object every time you're outputting a page of results). So instead I extend the `QueryResult` class with a helper subclass `Serializer` (with a friend `operator <<`) and a member function `outputPage()`:

```cpp
class QueryResult {
    struct Serializer {
        const QueryResult *qr;
        unsigned offset = 0, limit = std::numeric_limits <unsigned>::max();
    };
public:
    Serializer outputPage(unsigned offset, unsigned limit) const
    {
        return Serializer{this, offset, limit};
    }
    friend std::ostream & operator << (std::ostream &os, const Serializer &s);
};
```

The `operator <<` is left as a rather obvious exercise for the reader. Usage:

```cpp
QueryResult qr;
os << qr.outputPage(45, 15) << '\n';
```
Finally, you can also write a class that wraps a `std::ostream *` and use that encapsulation to write things out differently depending on the wrapping class. For example:

```cpp
uint32_t v = 0x01020304;

BigEndianStream beStream{&os};
//assume an overload exists: BigEndianStream & operator << (BigEndianStream &, uint32_t);
beStream << v; // writes out: 01 02 03 04

LittleEndianStream leStream{&os};
//assume an overload exists: LittleEndianStream & operator << (LittleEndianStream &, uint32_t);
leStream << v; // writes out: 04 03 02 01
```
```python
#!/usr/bin/python3.9
"""
uncozy.py
By @Snigger
Just a simple script to scrape the entire history of a page from the wayback machine
"""
from bs4 import BeautifulSoup
import os
import requests
from threading import Thread
from urllib import request
import time

# AHAHAHAHA FUCK YOU NICK, YOU'VE BEEN SNIGGERED
cozyURL = "https://api.cozy.tv/cache/homepage"


def get_wayback_entries(target: str):
    query = f"https://web.archive.org/cdx/search/cdx?url={target}"
    # Get all entries of url on wayback
    req = requests.get(query)
    entries = req.text.split("\n")
    # Discard nonsense
    entries.pop(0)
    return entries


def get_url_by_timecode(timecode: int, target: str = cozyURL):
    return f"https://web.archive.org/web/{timecode}if_/{target}"


def scrape(page: str):
    # Get page
    result = request.urlopen(page)
    content = result.read()
    # Get body text
    soup = BeautifulSoup(content, "html.parser")
    return soup.text


def dump_data(data: str, timecode: int, directory: str = "./data"):
    with open(os.path.join(directory, f"{timecode}.json"), "w") as file:
        file.write(data)


def process_timecode(timecode: int):
    print(f"\tDownloading {timecode}")
    link = get_url_by_timecode(timecode)
    payload = scrape(link)
    dump_data(payload, timecode)
    print(f"\tFinished downloading {timecode}")


def scrape_all(target: str):
    threads = []
    for entry in get_wayback_entries(target):
        # Ignore weird end thing that kept fucking things up
        if entry == "":
            continue
        # Get UTC timecode (second field of the CDX line)
        timecode = entry.split(" ")[1]
        # Thread stuff
        thread = Thread(target=process_timecode, args=(timecode,))
        threads.append(thread)
        thread.start()
        # This kind of negates the point of multithreading tbh
        time.sleep(3)
    # Cleanup
    for thread in threads:
        thread.join()
    print("Done")


def main():
    scrape_all(cozyURL)


if __name__ == "__main__":
    main()
```
```python
#!/usr/bin/python3.9
import csv
import json


def check_stats_on_user(user: str):
    with open("./data/master.json", "r") as file:
        database: dict = json.load(file)
    for utc, dataThatDay in database.items():
        users = dataThatDay.get("users", list())
        for person in users:
            if person.get("name") != user:
                continue
            viewerCount = person.get("viewers", -1)
            followerCount = person.get("followerCount", -1)
            yield utc, followerCount, viewerCount


def dump_to_csv(user: str):
    with open(f"./data/{user}.csv", "w", newline="") as file:
        writer = csv.writer(file)
        writer.writerow(["Time", "Followers", "Viewers"])
        for utc, fCount, vCount in check_stats_on_user(user):
            writer.writerow([utc, fCount, vCount])


def main():
    dump_to_csv("nick")


if __name__ == "__main__":
    main()
```
```python
#!/usr/bin/python3.9
import csv
import matplotlib.pyplot as plt
import sys


def render(filename: str):
    times = list()
    viewers = list()
    followers = list()
    with open(filename, "r") as file:
        lines = csv.reader(file)
        next(lines)  # skip the header row
        for row in lines:
            utc, fCount, vCount = map(int, row)
            times.append(utc)
            followers.append(fCount)
            viewers.append(vCount)
    fig, ax = plt.subplots()
    ax.plot(times, viewers)
    ax.plot(times, followers)
    plt.xlabel("UTC Time")
    plt.legend(["Viewers", "Followers"], loc=0, frameon=True)
    plt.show()


def main():
    render(f"./data/{sys.argv[1]}.csv")


if __name__ == "__main__":
    main()
```
```python
#!/usr/bin/python3.9
import datetime
import json
import math
import os
import random
import re
import time
import urllib.request


class Info:
    threshold = 30 * 60  # delete backups older than 30 minutes
    outdir = "./data"
    url = "https://api.cozy.tv/cache/homepage"
    time = 3 * 60        # base polling interval in seconds
    tolerance = int(1.5 * 60)  # random jitter added to each interval
    outfile = "data.json"
    logfile = "log.txt"
    pattern = re.compile(r"backup-([0-9]{9,11})-data\.json")


class Nanny:
    def __init__(self):
        self.data = dict()
        if not os.path.exists(Nanny.get_outfile()):
            with open(Nanny.get_outfile(), "w") as _:
                pass

    @staticmethod
    def print(text: str, file: str = Info.logfile):
        print(text)
        with open(file, "a") as log:
            log.write(f"{text}\n")

    def write(self):
        Nanny.backup()
        with open(Nanny.get_outfile(), "r") as file:
            try:
                oldData = json.load(file)
            except json.decoder.JSONDecodeError as e:
                Nanny.print(str(e))
                oldData = dict()
        oldData.update({f"{Nanny.get_utc()}": self.data})
        with open(Nanny.get_outfile(), "w") as file:
            json.dump(oldData, file)

    def grab(self):
        req = urllib.request.Request(
            Info.url,
            data=None,
            headers={"User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_3) "
                                   "AppleWebKit/537.36 (KHTML, like Gecko) "
                                   "Chrome/35.0.1916.47 Safari/537.36"})
        text = urllib.request.urlopen(req).read()
        self.data = json.loads(text)

    def mainloop(self):
        try:
            while True:
                Nanny.print("Collecting data...")
                self.grab()
                # self.display_live()
                # Cozy doesn't seem to have this implemented yet
                self.write()
                self.clean()
                Nanny.display_time()
                Nanny.nap()
                Nanny.print("=" * 25)
        except KeyboardInterrupt:
            pass
        finally:
            self.write()

    @staticmethod
    def display_time():
        # Get time
        currentDateTime = datetime.datetime.now()
        current = Nanny.get_utc()
        Nanny.print(f"Data captured at {currentDateTime.strftime('%d/%m/%Y %H:%M:%S')} ({current})")

    @staticmethod
    def nap():
        # Sleep for the base interval plus some jitter
        randomShift = random.randint(-Info.tolerance, Info.tolerance)
        sleepTime = Info.time + randomShift
        Nanny.print(f"Sleeping for {sleepTime} seconds")
        time.sleep(sleepTime)

    @staticmethod
    def backup():
        with open(f"{Info.outdir}/backup-{Nanny.get_utc()}-{Info.outfile}", "w") as outFile, \
                open(Nanny.get_outfile(), "r") as inFile:
            outFile.write(inFile.read())

    @staticmethod
    def get_utc():
        return math.floor(time.time())

    @staticmethod
    def get_outfile():
        return f"{Info.outdir}/{Info.outfile}"

    def display_live(self):
        Nanny.print("Currently the following are live: ")
        users = self.data.get("users", dict())
        for userData in users:
            if userData.get("live", None) is not None:
                user = userData.get("name", "ERROR")
                Nanny.print(f"\t{user}")

    @staticmethod
    def clean():
        files = list()
        for _, __, filenames in os.walk(Info.outdir):
            files.extend(filenames)
        for file in filter(lambda f: f.startswith("backup"), files):
            groups = Info.pattern.findall(file)
            if len(groups) < 1:
                continue
            utc = int(groups.pop(0))
            if (Nanny.get_utc() - utc) > Info.threshold:
                Nanny.print(f"Deleting {file}")
                os.remove(f"{Info.outdir}/{file}")


def main():
    Nanny().mainloop()


if __name__ == "__main__":
    main()
```
How come Java thinks that taking an ActionListener, passing it to a class as an array, and then calling it from inside another ActionListener is an "Invalid Statement"?

```java
JButton btn = new JButton();
btn.addActionListener(event -> { this.thing = 22; this.actions[3](event); }); // <- Java rejects this call syntax
```

Why are you coding JavaScript in Java?
As I mentioned in chat earlier, to get more precise numbers we should use the API status call that each individual streamer's page offers, not the homepage's cache link. The individual streamer's status API call can tell us when they are live. This is important, since we should then be able to get a second-by-second status update from the API at the moment Fuentes goes live, and we can validate that the first viewer count is in the multiple thousands - something we all see when he goes live but have no "proof" of.

Wrote this lil beauty to help us figure out how bad the botting is on cozy.tv.
> event -> { this.thing = 22 }

Java. ActionListener, like all interfaces with a single method, can be implemented with a lambda like that - but a stored listener has to be invoked through its `actionPerformed()` method, not called like a function:

```java
btn.addActionListener(event -> {
    System.out.println(100);
    this.actions[1].actionPerformed(event);
});
```

All right, I'll bite. Is there a reason you format your posts to be precisely 71 characters wide and no wider?

Probably static word wrapping in an editor set to conform with Linux-style git commit message limits on line length.