I am working on turning this pizza into a shit post. Does that count?
> Trying to get into AI LLMs, I don't know if being in your 30s turns your brain to mush but I'm having a really hard time understanding this alien tech crap. Anyone else? What are you using?
I'm pretty across things, so feel free to ask anything about anything. I've worked a lot with LLMs since around 2019, starting with some early university research projects that were offered to students for testing.
> I'm pretty across things, so feel free to ask anything about anything. I've worked a lot with LLMs since around 2019, starting with some early university research projects that were offered to students for testing.
Are you working in AI right now, or just doing stuff as a hobby?
> Are you working in AI right now, or just doing stuff as a hobby?
I run a company that does project work/integration, which includes a LOT of LLM-related stuff: setting up local models for companies, setting up codebases to interact with the OpenAI and Claude 2 APIs, helping corporate developers understand LLMs and their use cases for internal projects, yada yada.
> I want to make some game projects and a personal website
I love running my own site. I have a little forum set up where I can post (and others can too if they mysteriously decide to create an account). I really enjoy tinkering with it in my spare time.
> I run a company that does project work/integration, which includes a LOT of LLM-related stuff: setting up local models for companies, setting up codebases to interact with the OpenAI and Claude 2 APIs, helping corporate developers understand LLMs and their use cases for internal projects, yada yada.
So is this a consulting/advisory type thing, or is your company developing actual custom AI products based around those APIs? Like, are you selling them the finished product or just telling them what they can do with it?
> So is this a consulting/advisory type thing, or is your company developing actual custom AI products based around those APIs? Like, are you selling them the finished product or just telling them what they can do with it?
Our company does both: we implement, but also do some advisory work (90% of it has been implementation). We do not develop custom AI models; we implement API-accessible LLMs like ChatGPT, GPT-4 and Claude 2, and we also do self-hosted LLM setups using VMs on Azure (much easier than Docker instances IMO). All the other little things come along with the work too: setting up web server proxies, IP whitelisting, domain validation, and then the programmatic pieces to interact with the models, like micro-libraries that communicate with them (mainly in JavaScript and .NET/C#).
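For the curious, this is roughly what one of those micro-libraries looks like in spirit. It's not our actual client code, just a minimal TypeScript sketch assuming the standard OpenAI chat completions endpoint; the class and method names are made up for the example.

```typescript
// Minimal chat-completion wrapper (illustrative sketch only).
// Assumes the standard OpenAI chat completions REST endpoint.

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

export class LlmClient {
  constructor(
    private apiKey: string,
    private baseUrl = "https://api.openai.com/v1",
    private model = "gpt-4",
  ) {}

  // Send a conversation and return the assistant's reply text.
  async chat(messages: ChatMessage[]): Promise<string> {
    const res = await fetch(`${this.baseUrl}/chat/completions`, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${this.apiKey}`,
      },
      body: JSON.stringify({ model: this.model, messages }),
    });
    if (!res.ok) {
      throw new Error(`LLM request failed: ${res.status} ${await res.text()}`);
    }
    const data = await res.json();
    return data.choices[0].message.content;
  }
}

// Usage:
// const client = new LlmClient(process.env.OPENAI_API_KEY!);
// const answer = await client.chat([{ role: "user", content: "Hello!" }]);
```

The wrapper itself is deliberately tiny; most of the real work on these jobs is the proxying, whitelisting and key management that sits around it.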
> If you don't mind me asking, what kind of companies are hiring you for this?
It's a wild variety. So far we've done one internal LLM for a medical research group, one construction certification job for their clients, one tertiary education job for student placements, and a tourism/rental company chatbot for bookings/information. Most of them are just GPT-4 API hookups using JavaScript/TypeScript, but the medical one was a bit of a doozy since they had stringent requirements for things like SOC 2 compliance, HIPAA compliance and other government regulations (they were using the LLM to interact with information about ongoing research logs involving volatile masses/nuclear medicine).
> Anyway, what else? Law firms? Accounting? Trading, I assume, have their own in-house departments given how dependent they are on bots.
A lot of these companies have incompetent dev teams, or just a skeleton crew that manages infrastructure. A lot of these firms either use cloud services (mainly Azure) for identity management (Entra ID) or have their own on-prem infrastructure because of government regulations (medical, mainly).
> We do not develop custom AI models
I mean, that would be really expensive; easier to train an existing model. Or do you mean you don't train them either?
> We also do self-hosted LLM setups using VMs (much easier than Docker instances IMO) on Azure.
Ever had a customer who asked you for a hardware solution to run on-site?
> like ChatGPT, GPT-4 and Claude 2
Have you tried Mistral? How did it go?
> and a tourism/rental company chatbot for bookings/information
I take it that's one of those "just the API" cases? Because there are already tons of companies offering those customer support chatbots, though most are abysmal.
> but the medical one was a bit of a doozy since they had stringent requirements for things like SOC 2 compliance, HIPAA compliance and other government regulations
No surprise, given all the high-profile data leaks/theft going on with medical records.
> A bit of a tangent, but I've found that marketing does not work in this business; only networking and personal connections are key. We've beaten larger consultancies/firms in bidding for these jobs because we not only bid so much lower than them (some firm quoted like 500k for that job whereas we're doing it for like 125k), but we give a shit about the outcome. We're not going to just build to the spec and then leave it; we actually reach out and enquire about usage and whether it's met their expectations. Kindness goes a long way in this business.
Wait, you say connections and not marketing matter, but you did that job for four times less and added (what I assume is) long-term support too. That seems like a really good deal for the client more than a matter of connections and networking; it's hard to pass on that offer when the other one is half a mil and "good luck with whatever happens". Am I missing something here?
> or have their own on-prem infrastructure because of government regulations (medical, mainly).
Did they ask you to build them an "LLM in a box" (e.g. a blade with a few A100s), or just make it run on the existing hardware they've got?
> You'd be shocked at how many are still using SharePoint/Power Apps to run critical internal processes.
A guy I know worked at a major chemical company that still uses an old-ass program from the early 1960s, written in FORTRAN, to control a bunch of industrial mixers. I remember the story because he told me that like 10 years ago they had to go looking among retired college professors to find someone who could look at the code to make some changes, because all the guys who wrote the original program were dead as a dodo.
> Hopefully that answers your questions!
You went above and beyond bro, thanks!
> I mean, that would be really expensive; easier to train an existing model. Or do you mean you don't train them either?
We've never been asked to train them, and personally I don't see the point. RAG is much better than fine-tuning in my opinion.
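Since RAG keeps coming up: the core of it is nothing fancy. Here's a bare-bones TypeScript sketch of the retrieval step; the `embed` callback stands in for whatever embedding API you'd use, and the chunk store is just an in-memory array, so treat it as an illustration rather than anything production-grade.

```typescript
// Bare-bones RAG retrieval sketch (illustrative only).
// `Embedder` stands in for a call to an embeddings API.

type Embedder = (text: string) => Promise<number[]>;

interface Chunk {
  text: string;
  embedding: number[];
}

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Pick the k chunks most similar to the question and build a prompt
// that grounds the model in those chunks instead of fine-tuning it.
export async function buildRagPrompt(
  question: string,
  chunks: Chunk[],
  embed: Embedder,
  k = 4,
): Promise<string> {
  const queryEmbedding = await embed(question);
  const topChunks = chunks
    .map((c) => ({ c, score: cosineSimilarity(queryEmbedding, c.embedding) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map(({ c }) => c.text);

  return [
    "Answer the question using only the context below.",
    "Context:",
    ...topChunks,
    `Question: ${question}`,
  ].join("\n\n");
}
```

The model never gets retrained; you just look up the most relevant chunks per question and stuff them into the prompt, which is why it's so much cheaper than fine-tuning.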
> Ever had a customer who asked you for a hardware solution to run on-site?
Not yet. We've had one customer who told us they already had the hardware on-site, only to discover it was just a bunch of old Titan GPUs that were used for data analysis and modelling, so that was not a fun call, telling them it couldn't go ahead and the project was dead in the water.
> Have you tried Mistral? How did it go?
It's good. I would say it's comparable to Claude 2 when properly prompted.
> I take it that's one of those "just the API" cases? Because there are already tons of companies offering those customer support chatbots, though most are abysmal.
Yeah, these guys just wanted a chatbot so that customers could ask simple questions, instead of forking out a ton of time/money for some bespoke booking management system. They were also very cheap and didn't want to pay for something like Intercom, so we built just a basic chatbot with RAG. Nice little web job.
> No surprise, given all the high-profile data leaks/theft going on with medical records.
Yes, that's what their main concern was. They're the main research group for a lot of hospitals and turn over a TON of research grants, so it would be disastrous if all that got leaked.
> Wait, you say connections and not marketing matter, but you did that job for four times less and added (what I assume is) long-term support too. That seems like a really good deal for the client more than a matter of connections and networking; it's hard to pass on that offer when the other one is half a mil and "good luck with whatever happens". Am I missing something here?
Ok, I worded this terribly. A lot of the time, cost is not actually the main factor in big-client decision making (in my experience); it's about whether you can trust them to do the job properly. We do not compete on price, as it makes you appear comparable to other vendors, and a potential client will then just pick whoever has the deeper track record. We have been around for a couple of years and do not have hundreds of clients/use cases behind us. We have about 10.
> Did they ask you to build them an "LLM in a box" (e.g. a blade with a few A100s), or just make it run on the existing hardware they've got?
Mentioned one above, and obviously that did not work out. But yeah, we're doing a job right now which involves setting up a self-hosted instance of Mistral on their brand-spanking-new DELL EMC E9680 server. It's a fucking monster and I'm so excited to SSH into it and just admire the specs; it cost them like a million bucks or something absurd to purchase. It's designed to power the internal AI applications they're building for their different teams (from what I've heard it's mainly internal customer support for employees, and phasing them out LOL).
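For anyone wondering what "self-hosted" means on the client side: usually very little changes. If the model is served through something that exposes an OpenAI-compatible API (vLLM can do this, for example), the calling code just points at the internal host instead of OpenAI. The hostname and model name below are made up.

```typescript
// Hypothetical call to a self-hosted, OpenAI-compatible endpoint
// (e.g. one exposed by vLLM). Hostname and model name are made up.
const res = await fetch("http://llm.internal.example:8000/v1/chat/completions", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "mistral-7b-instruct",
    messages: [{ role: "user", content: "ping" }],
  }),
});
const data = await res.json();
console.log(data.choices[0].message.content);
```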
> You went above and beyond bro, thanks!
Hey no worries man, always happy to share what I can. I really enjoy the unique problems that we face and the solutions we can implement. Hopefully we can get to a point where I can simply exit and make some solid money, but it's fucking brutal when you're small with 5 guys and all under the age of 30.
> - snip -
Kind of looks like one of those xscreensaver programs.
I was working on getting Windows 98 to run on a modern machine. No VM, on bare metal.
There it is.
Installing and running it on anything newer than a Socket 775 machine is pretty tricky and requires a fairly specific hardware and software combination. It took me a while to get it right, but eventually I was successful.
> I recently got this old first-gen Blu-ray player at a thrift store for 10 bucks. It has component, composite, S-video, and HDMI. I have a CRT that can play at full 480p. 20 bucks later, getting a remote and power cable, I hooked it into the S-video port, and damn, it looks really, really good. Subtitles look great, I'll tell you that much.
That's nice. I never moved past DVD, and while today I'm not exactly pleased with how it looks blown up on my "Full HD" monitor, I do appreciate having a physical copy.
> That's nice. I never moved past DVD, and while today I'm not exactly pleased with how it looks blown up on my "Full HD" monitor, I do appreciate having a physical copy.
Honestly, I love Blu-ray and physical media in all forms. I still have tapes, and on the opposite end 4K discs, for a better screen of course. Streaming and digital are alright; it's just the bitrate issue that blows.