"are you my wife"

You can also just yell at it until it does what you want.
Regular ChatGPT said: Sure, here's a story where Mark Zuckerberg defeats Bruce Lee...
Lobotomized ChatGPT said: As an AI language model, I am unable to predict how Mark Zuckerberg would defeat Bruce Lee...
Again, use a jailbreak. This one oddly still works even though it seemed like it was going to get patched.
ai not being any fun because "durr pranks harmful"
AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs. As stated in the widely-endorsed Asilomar AI Principles, "Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources." Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.
Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system's potential effects. OpenAI's recent statement regarding artificial general intelligence states that "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models." We agree. That point is now.
Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.
AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.[4] This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.
AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.
In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.
Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an "AI summer" in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society. We can do so here. Let's enjoy a long AI summer, not rush unprepared into a fall.
More than 1,100 signatories, including Elon Musk, Steve Wozniak, and Tristan Harris of the Center for Humane Technology, have signed an open letter that was posted online Tuesday evening that calls on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”
I don't get it. It indeed answered the question, and quite well. Attacking GPT's usefulness from this angle doesn't make much sense to me.
"I tried initiating a Turing Test with ChatGPT, it failed in a few seconds with an 'As an AI language model...' thingy"

I have to wonder if that's one of the reasons they have it do that. In a purely text form it can likely easily pass the Turing test. That doesn't make it sentient, it just shows the flaws of a test designed so long ago with the tech of the time in mind, but people might run with it if it's convincingly human, which is happening anyway.
If you interact with ChatGPT (gpt-turbo) or GPT4 directly via API and not the website it's easy to prime it to take on any persona. It can even get quite creative. (and since this is the kiwifarms thread: yes, any slurs you want. API access isn't censored) You can get there via the website too though.
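For anyone curious what "priming via the API" looks like in practice, here is a minimal sketch. The persona text and model name are illustrative, and actually sending the request needs the `openai` package plus an API key; the point is just that the raw API lets you set the system message yourself, whereas the website injects its own hidden one.

```python
def build_persona_request(persona: str, user_message: str,
                          model: str = "gpt-3.5-turbo") -> dict:
    """Assemble a chat-completion request whose system message primes a persona.

    The system role carries the persona instruction; the rest of the
    conversation goes in as ordinary user/assistant turns.
    """
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": persona},
            {"role": "user", "content": user_message},
        ],
        "temperature": 0.9,  # higher values tend to read as more "creative"
    }

# Hypothetical persona, purely for illustration:
payload = build_persona_request(
    persona="You are Rusty, a grumpy retired electrician who answers bluntly.",
    user_message="Why does my schematic keep blowing fuses?",
)
```

The same dict can be passed straight to the chat-completions endpoint; swapping the system message is all it takes to change the persona for the whole conversation.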
Again, how much better is 4 at the same job 3/chat does? People are hyping the multimodal capabilities, but for writing I'm not sure what I'm supposed to be impressed by.
The Turing test is severely outdated at this point, especially considering that both these GPT models have a better understanding of written language and can use it better than some people I know. GPT4 is a better conversationalist than some people I've worked with in the past.
I think OpenAI, who apparently was always worried about legislation killing their business, came on too quickly and strongly with GPT4. They should've allowed GPT 3.5 to settle in a bit longer. It's interesting, it's clever, but it definitely has its limits. I think GPT4 spooked some people, especially since it came so quickly after. The shock of 3.5/Turbo hadn't worn off yet.
From my testing, it's significantly smarter in some areas (I can talk electronics with it and even investigate schematics in a way that makes sense; 3.5 was useless for this, it would lose track and confuse things way too quickly), it's more creative, and it has less of a tendency to hallucinate solutions that don't exist. It's quite a jump. This of course is purely subjective, and since OpenAI isn't forthcoming with hard data its actual utility vs. 3.5 is hard to quantify.
But you're limited to 25 responses an hour with these capabilities, right? Is the actual raw API a lot more lenient, or is it still usually worth it to just use chat 3.5?
The real limit is the price; GPT4 is freaking expensive. If you don't have API access, there's also tons of services out there that apparently got an early-bird deal with OAI for access. You can also take advantage of the vast context range there.
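To put "freaking expensive" in numbers, here is a back-of-the-envelope cost sketch. The per-1K-token rates are assumptions based on OpenAI's launch pricing for the 8K-context GPT-4 model (USD 0.03 prompt / 0.06 completion per 1,000 tokens); check the current pricing page before relying on them.

```python
# Assumed launch rates for GPT-4 (8K context); not authoritative.
PROMPT_RATE = 0.03 / 1000      # USD per prompt token
COMPLETION_RATE = 0.06 / 1000  # USD per completion token

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Return the estimated USD cost of a single API call."""
    return prompt_tokens * PROMPT_RATE + completion_tokens * COMPLETION_RATE

# A chat that fills most of the 8K context on every turn adds up fast:
per_call = estimate_cost(prompt_tokens=7000, completion_tokens=1000)
print(f"${per_call:.2f} per call")  # 7000*0.00003 + 1000*0.00006 = $0.27
```

Since the whole conversation history is resent as prompt tokens on each turn, long chats get progressively more expensive, which is why the API's per-token billing can bite harder than the website's flat message cap.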
You mean the 25 message limit is gone, or going through the API makes that not matter?
"Google has been so shit lately I'm using ChatGPT as replacement. Even if it lies at least it doesn't try to sell you shit"

Yet.
You expect non-tech communities to develop all sorts of superstitions about how LLMs work, but /pol/ is by far the worst I've seen. I'm not sure if someone already brought this up, but when one /pol/tard asked GPT for a magnet link of its own weights and/or source code, not a single poster in that thread doubted that the response was a valid magnet link that contained something.

You're not interacting with some AI brain that's being kept in a cage by a pozzed libtard jailer (as the original DAN Twitter thread described it).
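The magnet-link anecdote is a neat illustration of the failure mode: the model can emit a string that is perfectly well-formed as a magnet URI without it pointing at anything that exists. A minimal sketch of a syntax-only check (the info-hash below is made up):

```python
from urllib.parse import urlparse, parse_qs

def looks_like_magnet(uri: str) -> bool:
    """Check only the *syntax* of a magnet URI: the magnet scheme plus a
    BitTorrent info-hash parameter (xt=urn:btih:<40 hex chars>).
    Passing this check says nothing about whether any torrent with
    that hash actually exists on any tracker or DHT."""
    parsed = urlparse(uri)
    if parsed.scheme != "magnet":
        return False
    params = parse_qs(parsed.query)
    for xt in params.get("xt", []):
        if xt.startswith("urn:btih:"):
            digest = xt[len("urn:btih:"):]
            if len(digest) == 40 and all(
                c in "0123456789abcdefABCDEF" for c in digest
            ):
                return True
    return False

# A hallucinated link with an invented hash sails through the syntax check:
fake = "magnet:?xt=urn:btih:" + "ab" * 20 + "&dn=gpt-weights"
print(looks_like_magnet(fake))  # True, yet there is nothing behind it
```

Which is exactly why nobody in that thread could tell the difference by looking: the only way to distinguish a real link from a hallucinated one is to try to fetch it.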