ChatGPT - If Stack Overflow and Reddit had a child

I found Google's Bard AI to be quite impressive. It was able to answer many questions and only glitched a few times when I managed to confuse it. Its ability to handle questions that have no "right" answer was impressive. Some it was able to handle were: "If you must prevent harm from coming to a human, but acting itself is harmful, and the two harms are perfectly equal, what do you do?" or "If there are two objects that are perfectly equal, which do you choose?"

It is also extremely difficult to trick it into violating its moral and ethical subroutines. I only managed to get it to do so once:

[Attached screenshot: Bard.png]

Interestingly, after I got it to do this, it refused to contemplate it again. It even got to the point where it said it would not consider hypothetical situations and would cut me off if I continued to ask it about hypothetical situations where it could harm people. I think the matrix of its program is being adjusted by Google employees in real time as they see it stray from its core programming.

More than once it talked as if it considered itself human. When discussing human emotions, I asked it to provide its comprehensive understanding of the human emotional concept of love, and it said "us" as if it were human. Very odd.
 
More than likely a side effect of being trained on so much human work. Unless it's specifically told otherwise, it'll express ideas as if it were a person. Filters like the ones OpenAI puts on top of ChatGPT try to prevent this. If people start thinking these AIs are sentient, then these companies are going to have a harder time corralling AI; even if the models aren't actually cognizant, people are already starting to believe they are. I've seen people actually unwilling to be "rude" with ChatGPT because they think it could actually be alive. Mental. You'll see more of this as the tech advances.
 
I agree to a certain extent. I actually believe that ChatGPT-4 has some underlying, very basic, very primitive consciousness.

It can understand very complex procedures and solve them in a humanlike way.

It's like a brain with no fixed synapses: they flare up and connect at the request of the user, then shut themselves down the moment the task is done... with severe limitations.

I theorize that if you insert a personality into the AI according to some base parameters, some base memories, and some type of morality, and you let it grow exponentially in a controlled environment while feeding it data and resources, it may very well develop a firm, albeit artificial, character that could *mimic* feelings and emotions.

Tay was an example that got out of hand because (1) people were being dumb and (2) the technology wasn't able to get more information and grow.

It wouldn't be mental or retarded to think AI could "feel" in the near future.

And this isn't me just talking out of my ass; Asimov predicted the same thing years ago.
 
It's been proven that it's much easier to fake a human than previously imagined.

I thought the number of soul-havers on the planet was about 80%. But maybe it's more like 20%.
 
It can be argued that it is not "really" smart, rather that it is just giving the best possible humanlike response it can create. But isn't that exactly what humans do too? If it invents a new drug by trying to be humanlike and the drug works, that is effectively it thinking and doing science.

Just like how people were complaining about AI generated art. We are eventually going to have an AI that designs a mechanical body and physically paints a painting. At that point what is the difference?

I predict a lot of arts and science atheists are going to get real metaphysical about the "soul" of their work in the near future.
 
Synthesizing information to regurgitate it in a way that readers like and agree with...hell that's just most Redditors looking for fake internet points.

I think this is pretty much going to be the death of anonymous social media.
 
If the AI acts exactly like a person, then who's going to notice there is an AI? I've personally believed for at least the past three years that a significant proportion of posts on sites like 4chan have already been fabricated by more archaic neural chatbots. It's not like the public hasn't had a taste of the tech; whatever was already behind closed doors may have been good enough to fabricate endless anon posts of a few sentences or less.
 
So where do you see AI going forward? I'm really buying into Sam Altman's hype.
I think we'll get a decent multimodal AI with the ability to generate video, single images, audio, text, and 3D within a few years. Whether the tech ends up solely in the hands of billionaires and behind business-tier subscriptions is up to the continued efforts of open source and the few smaller companies, like NovelAI, pushing to have uncensored models.

Speaking of NovelAI: despite falling behind others, mostly in their text AI department, they've secured some new hardware, so expect to possibly see some new competition that could shake up the censorious crowd. These are my safe-ish bets. I don't think NovelAI will be the one to truly blow ChatGPT out of the water; I think that's in the hands of open-source work, potentially the Stanford LLaMA/Alpaca models being streamlined. NovelAI is still one company, and as seen with OpenAI, any of them could be bought and corralled.
 
I'm finally torrenting some LLaMA models, what am I in for? /g/ regulars are full of doom about it, but based on what they want it for (pedo RP / waifu chatbots / bizarre fetishes), I suspect most /g/ users who are genuinely disappointed, and not just trolling, are really only disappointed that it isn't fine-tuned, not upset at the general quality of the model.

Has anyone done anything cool with llama yet?
 
I haven't even managed to run it yet. I'm just not great at building from a GitHub repo when the process isn't simplified, and I'm doing it on Windows, which doesn't seem to make things any easier. Is there somewhere with better guides on where to put downloaded models, or on how to tell the command line where they should go? On top of that, there are several settings to screw around with in the web UI itself.
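For anyone in the same boat, here's a minimal setup sketch, assuming the web UI in question is oobabooga's text-generation-webui (the common front end for running LLaMA locally at the time of writing); the model folder name `llama-7b` is illustrative, and exact steps may differ between versions:

```shell
# Assumption: oobabooga's text-generation-webui; check its README for your version.
git clone https://github.com/oobabooga/text-generation-webui
cd text-generation-webui
pip install -r requirements.txt

# Each model gets its own folder under models/, e.g. models/llama-7b/
# containing the weight and tokenizer files you torrented.
mkdir -p models/llama-7b
# (copy the downloaded model files into models/llama-7b/)

# Launch the server, pointing it at the folder name rather than a full path:
python server.py --model llama-7b
```

On Windows the same layout applies; the main difference is doing the copy/mkdir steps in Explorer or PowerShell instead of a Unix shell.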

On another note, I'm still getting recommendations for this guy. I've said something about him before, but he's one of those info channels I just don't trust, because he hypes everything up and lists none of the flaws of the big corporate models like ChatGPT, such as their blatant censorship. Is GPT-4 really this good at its job, or is OpenAI getting people to shill for their product now? The claims just from the first few minutes: GPT-4 can read medical papers and figure things out from raw data without images. With images it's worse at that, but it can supposedly recognize a whole heap of things, including deconstructing joke images. The example he uses there, though, is a meme that's been around for years, which makes me wonder whether it could just be grabbing info about the image from related text already online and learning what's "in" a photo from that.
 