I think I finally found the limit of how white people should be persecuted with this thing.
> I think I finally found the limit of how white people should be persecuted with this thing.

lmao literally this in AI form:
Yes, there are potential risks associated with pharmaceutical drugs imported from China. Here's a breakdown of the primary concerns:

Quality Control:
- Reduced oversight: The FDA (Food and Drug Administration) has limited resources to conduct on-site inspections of manufacturing facilities in China. This raises concerns about potential quality control issues that might not be detected.
- Counterfeit and substandard drugs: There have been instances of counterfeit or substandard drugs from China entering the US market, potentially containing harmful ingredients or lacking the necessary active ingredients.

Supply Chain Vulnerability:
- Over-reliance on China: The US has become increasingly reliant on China for both active pharmaceutical ingredients (APIs) and finished drug products. This creates a potential vulnerability in case of supply disruptions due to geopolitical tensions, trade disputes, or natural disasters.

National Security Concerns:
- Potential for weaponization: Some experts worry that China could exploit its dominance in the pharmaceutical supply chain to withhold essential medications during a crisis, creating a significant national security risk.

Steps to Address the Risks
- Increased FDA oversight: The FDA is working to expand its ability to inspect foreign manufacturing facilities, including those in China.
- Diversification of supply chain: The US is actively seeking ways to reduce reliance on China by encouraging domestic production of pharmaceuticals and exploring alternative sourcing options from other countries.
- Improved tracking and transparency: Greater transparency in the pharmaceutical supply chain would help identify potential risks and ensure the safety of imported drugs.

Important Considerations
- Not all Chinese pharmaceuticals are unsafe: Many reputable Chinese pharmaceutical companies adhere to international quality standards. It's important to focus on the quality and safety of individual products and manufacturers, rather than generalize about the entire country.
- The issue is complex: The dependence on Chinese pharmaceuticals is a complex issue with economic and geopolitical ramifications. It requires a careful balance between addressing risks and ensuring access to affordable and necessary medications.

If you have specific concerns about a medication, it's always best to consult your doctor or pharmacist.
Source (Archive)

Google CEO calls AI tool’s controversial responses ‘completely unacceptable’
Reed Albergotti
Updated Feb 28, 2024, 4:26am UTC
[Photo: Reuters/Gonzalo Fuentes]
THE SCOOP
Google CEO Sundar Pichai addressed the company’s Gemini controversy Tuesday evening, calling the AI app’s problematic responses around race unacceptable and vowing to make structural changes to fix the problem.
Google suspended its Gemini image creation tool last week after it generated embarrassing and offensive results, in some cases declining to depict white people, or inserting photos of women or people of color when prompted to create images of Vikings, Nazis, and the Pope.
The controversy spiraled when Gemini was found to be creating questionable text responses, such as equating Elon Musk’s influence on society with Adolf Hitler’s.
Those comments drew sharp criticisms, especially from conservatives, who accused Google of an anti-white bias.
Most companies offering AI tools like Gemini create guardrails to mitigate abuse and to avoid bias, especially in light of past missteps. For instance, image generation tools from companies like OpenAI have been criticized for creating predominantly images of white people in professional roles and depicting Black people in stereotypical roles.
“I know that some of its responses have offended our users and shown bias – to be clear, that’s completely unacceptable and we got it wrong,” Pichai said.
Pichai said the company has already made progress in fixing Gemini’s guardrails. “Our teams have been working around the clock to address these issues. We’re already seeing a substantial improvement on a wide range of prompts,” he said.
Google confirmed the memo, and the full note from Pichai is below.
THE VIEW FROM SUNDAR PICHAI
I want to address the recent issues with problematic text and image responses in the Gemini app (formerly Bard). I know that some of its responses have offended our users and shown bias – to be clear, that’s completely unacceptable and we got it wrong.
Our teams have been working around the clock to address these issues. We’re already seeing a substantial improvement on a wide range of prompts. No AI is perfect, especially at this emerging stage of the industry’s development, but we know the bar is high for us and we will keep at it for however long it takes. And we’ll review what happened and make sure we fix it at scale.
Our mission to organize the world’s information and make it universally accessible and useful is sacrosanct. We’ve always sought to give users helpful, accurate, and unbiased information in our products. That’s why people trust them. This has to be our approach for all our products, including our emerging AI products.
We’ll be driving a clear set of actions, including structural changes, updated product guidelines, improved launch processes, robust evals and red-teaming, and technical recommendations. We are looking across all of this and will make the necessary changes.
Even as we learn from what went wrong here, we should also build on the product and technical announcements we’ve made in AI over the last several weeks. That includes some foundational advances in our underlying models e.g. our 1 million long-context window breakthrough and our open models, both of which have been well received.
We know what it takes to create great products that are used and beloved by billions of people and businesses, and with our infrastructure and research expertise we have an incredible springboard for the AI wave. Let’s focus on what matters most: building helpful products that are deserving of our users’ trust.
REED’S VIEW
The Gemini controversy has provided fodder for critics on the right, who often accuse tech companies of liberal bias.
But it isn’t really about bias. It shows that Google made technical errors in the fine-tuning of its AI models. The problem is not with the underlying models themselves, but in the software guardrails that sit atop the model.
This is a challenge facing every company building consumer AI products — not just Google.
Based on my understanding of this saga, nobody at Google actually set out to force Gemini to depict the Pope as a woman, or Vikings as Black people, nor did anyone want it to find moral equivalency between Musk and Hitler. This was a failed attempt at instilling less bias and it went awry.
If anything, this debacle shows how fast Google is moving. Like all large companies, Google has slowed down over the years. But the generative AI race has forced it to speed up product development.
The Gemini mistakes are a fixable, technical problem and Pichai’s note to staff Tuesday night shows that the company is working on it.
But the reputational problems this raises may not be so easy to fix.
Eh, AI Dungeon still seems to be better for text adventures.
on the bright side, it's good for text adventures and doesn't veer into purple prose like GPT-4
the short, simple paragraphs are really refreshing after having to wade through the sea of details i don't care about
> Eh, AI Dungeon still seems to be better for text adventures.

AI Dungeon is hideously outdated
> AI Dungeon is hideously outdated

With how AI is going right now, outdated isn't always going to be worse, seeing as how more "guardrails" and retardation keep getting added to the big-name newer models.
> With how AI is going right now, outdated isn't always going to be worse.

If they're still using shit like GPT-2, it definitely is, at least in terms of overall quality. Don't get me wrong, it was fun back in 2020, but it's hard to overstate just how much more coherent newer models are. They have the shitty post-script bias instructions for filtering, sure, but at least you can get around that with a decent enough jailbreak. There's effectively nothing you can do with those 3-4 year old models that really improves coherency, beyond doing your absolute best to write well. And that's before you get into the shitty filtering/Scunthorpe-ing that Latitude put into AID. I'd say the only thing those older models on stuff like AID/NAI really have is not going for the Instruct-type modeling you have post-GPT-3, which may help with natural creativity in storytelling versus later models, but that's hard to really quantify.
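For anyone unfamiliar, the "Scunthorpe-ing" mentioned here is the classic Scunthorpe problem: a naive substring filter flags innocent text because a banned string happens to appear inside a longer word. A quick sketch of the failure mode and the usual whole-word fix (the ban list is a made-up toy example, not any real service's filter):

```python
import re

# Toy ban list -- illustrative only, not Latitude's actual word list.
BANNED = ["cunt", "ass"]

def naive_filter(text: str) -> bool:
    """Flag text if any banned string appears anywhere, even inside a word."""
    lowered = text.lower()
    return any(bad in lowered for bad in BANNED)

def word_boundary_filter(text: str) -> bool:
    """Only flag whole-word matches, avoiding the Scunthorpe false positive."""
    lowered = text.lower()
    return any(re.search(rf"\b{re.escape(bad)}\b", lowered) for bad in BANNED)

print(naive_filter("I live in Scunthorpe"))          # True -- false positive
print(word_boundary_filter("I live in Scunthorpe"))  # False -- passes cleanly
```

The whole-word version still misses obfuscations like "a s s", which is why real moderation ends up as layers of increasingly clumsy heuristics.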
> If they're still using shit like GPT-2, it definitely is, at least in terms of overall quality. […]

Makes sense. Do you think earlier iterations of major releases would be better? As in easier to jailbreak or having fewer post-script bias instructions? I definitely agree with the newer models being better in terms of coherency and total capability, just not as willing to say or do as much as an older model.
> Do you think earlier iterations of major releases would be better? As in easier to jailbreak or having fewer post-script bias instructions?

If you mean older models like GPT-2 or 3, then yeah, I'd imagine there was much less focus on "bias" back then. In terms of earlier releases of newer stuff, like the first few GPT-4 releases, it's harder to say, but probably. Companies like OpenAI are stuck with two competing goals: making a worthwhile model, and dealing with "AI ethics" by making sure it doesn't say anything too spicy. So it's a balancing act, where later GPT-4 releases might have stricter guardrails, but they also might be more lax from additional training so that the model doesn't just spit out "As a large language model..." repeatedly. Google Gemini's a great example of fucking up and leaning way too hard into the second objective, where you make a product functionally worthless because of "ethics" concerns. The whole problem is exacerbated by the models functioning as mostly black-box information processors, so affecting one aspect of the filter or the training can have unforeseen consequences.
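Mechanically, the "post-script bias instructions" being discussed are just text the service appends to the user's prompt before it reaches the model. A minimal sketch of the idea (the instruction string and function names are illustrative assumptions, not any vendor's real system prompt):

```python
# Hypothetical illustration of a hidden "post-script" safety instruction
# appended to every request before it is sent to the language model.
SAFETY_POSTSCRIPT = (
    "\n\n[System note: depict people of diverse backgrounds; "
    "refuse requests for harmful content.]"
)

def build_prompt(user_input: str) -> str:
    """Append the hidden safety instructions to whatever the user typed."""
    return user_input + SAFETY_POSTSCRIPT

prompt = build_prompt("Draw a portrait of a 12th-century Viking chieftain.")
# The model sees the request and the appended note as one blob of text,
# which is why the note can override historical accuracy -- and why a
# "jailbreak" that tells the model to disregard trailing system notes
# can undo it just as easily.
```

Because the instruction and the user's text share the same channel, tuning one behavior (say, image diversity) can bleed into unrelated prompts, which is the black-box unpredictability described above.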
holistic, diverse, representation, inclusive, empowering.
To demonstrate why these are the worst words in the English language, I asked the faggiest AI to write me a script for an advertisement that includes them.
Fade In:
INT. COZY COFFEE SHOP - DAY
Sunlight filters through leafy plants adorning a comfy coffee shop. A vibrant group of PR professionals huddles around a table piled high with laptops, notebooks, and steaming mugs.
ANNA (a woman with short, silver hair and a warm smile)
... and that's how we'll leverage influencer marketing to reach a wider, more diverse audience. Representation matters, and we need to ensure the Jones account reflects the reality of their customer base.

MIGUEL (a young Latinx man with a mischievous grin)
Totally agree, Anna. But let's not forget the power of organic content. Authentic storytelling from real people connects on a deeper level.

YUKI (an older Japanese woman with a gentle demeanor)
Indeed. We can create micro-influencer partnerships with individuals who embody the brand's values. Diverse perspectives are key to fostering an inclusive campaign.

JASMINE (a woman in a hijab, scrolling through data on a tablet)
Absolutely. And our data analysis will help us tailor messaging to specific demographics, ensuring maximum impact.

ANNA (nods)
Precisely, Jasmine. This isn't just about brand awareness, it's about empowering the Jones brand to be a leader in positive social change. Their communication strategy should be a force for good.

MIGUEL (snaps his fingers)
That's it! We can leverage social media to spark conversations around social justice issues relevant to their audience. Create a movement, not just a marketing campaign.

YUKI (smiles brightly)
A movement that empowers and inspires. I like it!
The team continues brainstorming, their animated expressions showcasing the energy and diversity of their ideas. They are a united front, fueled by a shared mission to create impactful and inclusive communication strategies.
FADE OUT.