Pendulum Swing Watch 2025

If I was a zoomie I'd be bitter as fuck having the prime years of my life stolen from me and forced to live under retarded race/gender/covid totalitarianism only to have things start to snap back to 90s normality right when college is over and it's time for wagie to get back in cagie.

Sadly they'll learn zero lessons and vote en masse for the next ugly Communist nigger the DNC shits out.
 
  • Zucc says he <3 free speech.
  • 3rd Party Fact Checkers "have done more harm than good".
  • Automated censorship systems are a failure.
  • Cites inclusive policies about "gender" specifically as being "outdated with discourse".
  • Non-criminal behavior will require a human report before any action is made.
  • People want more of the political news that they've been squashing.
  • Zucc specifically says "We're moving our report service team from California to Texas" and I literally laughed out loud.
  • "We will work with President Trump".
I feel optimistic about the future, but I wouldn't be surprised if the politispergs fucked it up.
 
Thunderdomers love Dana because, like them, he worships Donald Trump. So this post will upset a bunch of folks.
It's really revealing that simple association with Trump in any context constitutes "worship" to you
The messianic attitude the left takes towards their political figures and the way they project that onto their opposition is fucking embarrassing
 
Only a problem now that it's hurting the bottom line. If only corpo's could just have a little bit of balls and stand by what's so obviously right from every part of the retard spectrum. Like you're saying thought policing is a bad idea? No shit, Sherlock.

Doubt this will completely resolve the issue, but hopefully it means shadow bans on blacklisted words or whatnot will be removed, allowing removal to be context-dependent.
less punjab moderation the better they can barely understand english let alone moderate it.
 
It will never be enough until they stop their racially and sexually discriminatory hiring/promotion practices.
Mark my words, most of the companies rolling back are going to stop at surface-level changes. Walmart did the same thing while gloating about not stopping its workplace practices.
You are going to see more moves like this while, in the background, they continue DIE/BRIDGE/Marxist policies.

Then again, this could just be the start of actual change that will lead to better hiring policies.
 
For those who don't want to watch the video:

More Speech and Fewer Mistakes

January 7, 2025
Joel Kaplan, Chief Global Affairs Officer

Takeaways

  • Starting in the US, we are ending our third party fact-checking program and moving to a Community Notes model.
  • We will allow more speech by lifting restrictions on some topics that are part of mainstream discourse and focusing our enforcement on illegal and high-severity violations.
  • We will take a more personalized approach to political content, so that people who want to see more of it in their feeds can.

Meta’s platforms are built to be places where people can express themselves freely. That can be messy. On platforms where billions of people can have a voice, all the good, bad and ugly is on display. But that’s free expression.

In his 2019 speech at Georgetown University, Mark Zuckerberg argued that free expression has been the driving force behind progress in American society and around the world and that inhibiting speech, however well-intentioned the reasons for doing so, often reinforces existing institutions and power structures instead of empowering people. He said: “Some people believe giving more people a voice is driving division rather than bringing us together. More people across the spectrum believe that achieving the political outcomes they think matter is more important than every person having a voice. I think that’s dangerous.”

In recent years we’ve developed increasingly complex systems to manage content across our platforms, partly in response to societal and political pressure to moderate content. This approach has gone too far. As well-intentioned as many of these efforts have been, they have expanded over time to the point where we are making too many mistakes, frustrating our users and too often getting in the way of the free expression we set out to enable. Too much harmless content gets censored, too many people find themselves wrongly locked up in “Facebook jail,” and we are often too slow to respond when they do.

We want to fix that and return to that fundamental commitment to free expression. Today, we’re making some changes to stay true to that ideal.

Ending Third Party Fact Checking Program, Moving to Community Notes

When we launched our independent fact checking program in 2016, we were very clear that we didn’t want to be the arbiters of truth. We made what we thought was the best and most reasonable choice at the time, which was to hand that responsibility over to independent fact checking organizations. The intention of the program was to have these independent experts give people more information about the things they see online, particularly viral hoaxes, so they were able to judge for themselves what they saw and read.

That’s not the way things played out, especially in the United States. Experts, like everyone else, have their own biases and perspectives. This showed up in the choices some made about what to fact check and how. Over time we ended up with too much content being fact checked that people would understand to be legitimate political speech and debate. Our system then attached real consequences in the form of intrusive labels and reduced distribution. A program intended to inform too often became a tool to censor.

We are now changing this approach. We will end the current third party fact checking program in the United States and instead begin moving to a Community Notes program. We’ve seen this approach work on X – where they empower their community to decide when posts are potentially misleading and need more context, and people across a diverse range of perspectives decide what sort of context is helpful for other users to see. We think this could be a better way of achieving our original intention of providing people with information about what they’re seeing – and one that’s less prone to bias.
  • Once the program is up and running, Meta won’t write Community Notes or decide which ones show up. They are written and rated by contributing users.
  • Just like they do on X, Community Notes will require agreement between people with a range of perspectives to help prevent biased ratings.
  • We intend to be transparent about how different viewpoints inform the Notes displayed in our apps, and are working on the right way to share this information.
  • People can sign up today (Facebook, Instagram, Threads) for the opportunity to be among the first contributors to this program as it becomes available.
We plan to phase in Community Notes in the US first over the next couple of months, and will continue to improve it over the course of the year. As we make the transition, we will get rid of our fact-checking control, stop demoting fact checked content and, instead of overlaying full screen interstitial warnings you have to click through before you can even see the post, we will use a much less obtrusive label indicating that there is additional information for those who want to see it.
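The "agreement between people with a range of perspectives" bullet above is the core of the Community Notes approach: X's open-sourced scorer uses matrix factorization so that a note's score reflects the helpfulness left over after rater/note viewpoint alignment is modeled away. A toy sketch of that idea, assuming a simplified SGD fit (this is illustrative, not Meta's or X's actual code, and all hyperparameters are invented):

```python
import numpy as np

def note_helpfulness(ratings, n_iters=2000, lr=0.05, reg=0.1, seed=0):
    """Bridging-style note scoring (toy version).

    Fits rating ~ mu + b_user + b_note + f_user * f_note by SGD, so the
    note intercept b_note captures helpfulness NOT explained by
    viewpoint alignment (the f_user * f_note interaction term).
    `ratings` is a list of (rater_id, note_id, rating) with ratings in [0, 1].
    """
    rng = np.random.default_rng(seed)
    users = sorted({u for u, _, _ in ratings})
    notes = sorted({n for _, n, _ in ratings})
    ui = {u: i for i, u in enumerate(users)}
    ni = {n: j for j, n in enumerate(notes)}
    mu = 0.0
    bu, bn = np.zeros(len(users)), np.zeros(len(notes))
    fu = rng.normal(0, 0.1, len(users))
    fn = rng.normal(0, 0.1, len(notes))
    for _ in range(n_iters):
        for u, n, r in ratings:
            i, j = ui[u], ni[n]
            err = r - (mu + bu[i] + bn[j] + fu[i] * fn[j])
            mu += lr * err
            bu[i] += lr * (err - reg * bu[i])
            bn[j] += lr * (err - reg * bn[j])
            fu[i], fn[j] = (fu[i] + lr * (err * fn[j] - reg * fu[i]),
                            fn[j] + lr * (err * fu[i] - reg * fn[j]))
    # Higher intercept = rated helpful across the viewpoint spectrum.
    return {n: float(bn[ni[n]]) for n in notes}
```

On synthetic data where one note is rated helpful by raters on both "sides" and another only by one side, the cross-partisan note ends up with the higher intercept, which is exactly the property the bullet about agreement across perspectives is describing.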

Allowing More Speech

Over time, we have developed complex systems to manage content on our platforms, which are increasingly complicated for us to enforce. As a result, we have been over-enforcing our rules, limiting legitimate political debate and censoring too much trivial content and subjecting too many people to frustrating enforcement actions.

For example, in December 2024, we removed millions of pieces of content every day. While these actions account for less than 1% of content produced every day, we think one to two out of every 10 of these actions may have been mistakes (i.e., the content may not have actually violated our policies). This does not account for actions we take to tackle large-scale adversarial spam attacks. We plan to expand our transparency reporting to share numbers on our mistakes on a regular basis so that people can track our progress. As part of that we’ll also include more details on the mistakes we make when enforcing our spam policies.
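Those figures imply a large absolute number of wrongful takedowns. A quick back-of-envelope, using an assumed placeholder of 3 million daily removals for "millions" (only the 10-20% mistake rate comes from the post itself):

```python
# Back-of-envelope on the post's own numbers. ASSUMPTION: 3 million
# removals/day stands in for "millions"; only the 10-20% mistake rate
# ("one to two out of every 10 of these actions") is from the post.
daily_removals = 3_000_000
mistake_rate_low, mistake_rate_high = 0.10, 0.20

wrong_low = int(daily_removals * mistake_rate_low)    # 300,000 per day
wrong_high = int(daily_removals * mistake_rate_high)  # 600,000 per day
print(f"Implied wrongful removals per day: {wrong_low:,} to {wrong_high:,}")
```

Even at the placeholder scale, that is hundreds of thousands of mistaken enforcement actions every day, which is why the transparency reporting mentioned above matters.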

We want to undo the mission creep that has made our rules too restrictive and too prone to over-enforcement. We’re getting rid of a number of restrictions on topics like immigration, gender identity and gender that are the subject of frequent political discourse and debate. It’s not right that things can be said on TV or the floor of Congress, but not on our platforms. These policy changes may take a few weeks to be fully implemented.

We’re also going to change how we enforce our policies to reduce the kind of mistakes that account for the vast majority of the censorship on our platforms. Up until now, we have been using automated systems to scan for all policy violations, but this has resulted in too many mistakes and too much content being censored that shouldn’t have been. So, we’re going to continue to focus these systems on tackling illegal and high-severity violations, like terrorism, child sexual exploitation, drugs, fraud and scams. For less severe policy violations, we’re going to rely on someone reporting an issue before we take any action. We also demote too much content that our systems predict might violate our standards. We are in the process of getting rid of most of these demotions and requiring greater confidence that the content violates for the rest. And we’re going to tune our systems to require a much higher degree of confidence before a piece of content is taken down. As part of these changes, we will be moving the trust and safety teams that write our content policies and review content out of California to Texas and other US locations.
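The enforcement change described here is essentially two knobs: keep automated scanning only for a high-severity list, and put everything else behind a report-first gate with a raised confidence threshold. A hypothetical sketch of that routing logic (the severity categories come from the paragraph; the threshold values are invented, since the post only says "a much higher degree of confidence"):

```python
# Categories named in the post as still subject to automated scanning.
HIGH_SEVERITY = {"terrorism", "child_exploitation", "drugs", "fraud", "scams"}

def enforcement_action(violation_type, model_confidence, user_reported,
                       auto_threshold=0.95, report_threshold=0.98):
    """Route one flagged post. Thresholds are hypothetical placeholders."""
    if violation_type in HIGH_SEVERITY:
        # Illegal/high-severity content: automated scanning continues.
        return "remove" if model_confidence >= auto_threshold else "human_review"
    # Less severe policies: no action at all without a human report.
    if not user_reported:
        return "no_action"
    return "remove" if model_confidence >= report_threshold else "human_review"
```

The interesting consequence is the second branch: under this scheme a low-severity post the classifier is 99% sure about still sits untouched until someone reports it.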

People are often given the chance to appeal our enforcement decisions and ask us to take another look, but the process can be frustratingly slow and doesn’t always get to the right outcome. We’ve added extra staff to this work and in more cases, we are also now requiring multiple reviewers to reach a determination in order to take something down. We are working on ways to make recovering accounts more straightforward and testing facial recognition technology, and we’ve started using AI large language models (LLMs) to provide a second opinion on some content before we take enforcement actions.

A Personalized Approach to Political Content

Since 2021, we’ve made changes to reduce the amount of civic content people see – posts about elections, politics or social issues – based on the feedback our users gave us that they wanted to see less of this content. But this was a pretty blunt approach. We are going to start phasing this back into Facebook, Instagram and Threads with a more personalized approach so that people who want to see more political content in their feeds can.

We’re continually testing how we deliver personalized experiences and have recently conducted testing around civic content. As a result, we’re going to start treating civic content from people and Pages you follow on Facebook more like any other content in your feed, and we will start ranking and showing you that content based on explicit signals (for example, liking a piece of content) and implicit signals (like viewing posts) that help us predict what’s meaningful to people. We are also going to recommend more political content based on these personalized signals and are expanding the options people have to control how much of this content they see.
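A ranking function along these lines would blend the explicit and implicit signals named above with a per-user civic-content control. A minimal hypothetical sketch (the weights and log damping are invented for illustration; only the signal types come from the post):

```python
import math

def civic_content_score(base_score, explicit_likes, implicit_views,
                        user_civic_preference=1.0,
                        w_explicit=2.0, w_implicit=0.5):
    """Hypothetical blend of the signals named in the post.

    Explicit signals (likes) are weighted above implicit ones (views);
    log1p keeps heavy engagement from dominating; the final multiplier
    models the per-user "show me more/less political content" control.
    All weights are invented for illustration.
    """
    engagement = w_explicit * explicit_likes + w_implicit * implicit_views
    return base_score * (1.0 + math.log1p(engagement)) * user_civic_preference
```

Setting `user_civic_preference` to zero suppresses civic content entirely, while values above one boost it, which matches the stated goal that "people who want to see more political content in their feeds can."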

These changes are an attempt to return to the commitment to free expression that Mark Zuckerberg set out in his Georgetown speech. That means being vigilant about the impact our policies and systems are having on people’s ability to make their voices heard, and having the humility to change our approach when we know we’re getting things wrong.
Source (Archive)
 
@biglurker
The Federal Reserve needs to stop being a place run by a table of private bankers with the force of the state behind their private monopoly. These private bankers then give their banks money; that is what M1 is. They can then lend that out fractionally. They even lend this new money (that private banks create) out at higher interest than the free money the Fed gave them. The government should have one bank, which all people can access. This is how investment bankers control everything: they get the money to do it for free.
 
I think his hair is past the point of trying to keep it. He's not in dome territory, sure, but it's overall so thin (in terms of number of strands) that even with an only slightly-worse-than-average hairline, it's not fully succeeding in creating the illusion of a full head, and this is on edited video. It probably looks like a whole head of stringy 80s bangs in person. He needs to go crew cut. It would honestly look all right - he's not as ugly as everyone says, and he's stayed a healthy weight all his life.
 
>moving human moderators to Texas
Let me guess, the moderation jeets at Facebook (who have it out for meme page admins, mind) are going to cook in Satan's scrotum for a mere reduction of Facebook's operation cost.

If it’s half as full of screaming nutcases and rainbow haired gender respecters, not much will change.
Don't forget the baby boomers that only understand stuff that cable or evening news rattles on about...
 
I wonder if you can now use the Power Word on Facebook....

This already happens. People won't believe me with this unless they're in similar groups but Facebook is far more based in user demographics than other social media platforms.

[Nine Facebook screenshots attached]
 
